!standard G.3.1          07-08-02 AI05-0047-1/03
!standard G.3.2
!class binding interpretation 07-04-04
!status work item 07-04-04
!status received 07-04-04
!priority Medium
!difficulty Easy
!qualifier Clarification
!subject Annoyances in the array packages
!summary
TBD
!question
I am using this AI to record a number of annoyances or bugs that I encountered during the implementation of the array packages.
1 - The multiplication of matrices is defined to "involve inner products" (G.3.1(56/2)).
In strict mode, this requires that each component of the result comply with the requirements of G.3.1(83/2-84/2). Technically this is known as a "componentwise" error bound. An algorithm that gives a componentwise error bound is necessarily cubic (i.e., in O(N**3)). There exist, however, algorithms for multiplying matrices that are sub-cubic (e.g., Strassen's method), but these algorithms give normwise error bounds, meaning that the error on one component may spill over into other components. Some variants of the BLAS and other widely used linear algebra libraries use fast matrix multiplication algorithms, so the accuracy requirement makes it impossible to implement Generic_Real_Arrays by interfacing to these libraries.
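For reference, here is a minimal sketch of the conventional inner-product formulation that does meet a componentwise bound, using the nongeneric Ada.Numerics.Real_Arrays (the name Product is illustrative, and the length check on the inner dimension is elided):

   with Ada.Numerics.Real_Arrays; use Ada.Numerics.Real_Arrays;
   function Product (Left, Right : Real_Matrix) return Real_Matrix is
      --  Classical O(N**3) formulation: Result (I, J) is a single inner
      --  product of row I of Left and column J of Right, so the error
      --  bound on each component involves only that row and column.
      --  (Index arithmetic assumes moderate bounds; cf. point 4 below.)
      Result : Real_Matrix (Left'Range (1), Right'Range (2));
      Sum    : Float;
   begin
      for I in Result'Range (1) loop
         for J in Result'Range (2) loop
            Sum := 0.0;
            for K in Left'Range (2) loop
               Sum := Sum + Left (I, K)
                          * Right (K - Left'First (2) + Right'First (1), J);
            end loop;
            Result (I, J) := Sum;
         end loop;
      end loop;
      return Result;
   end Product;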
2 - The definition of Eigensystem does not impose any constraint on the length of the
out parameters Values and Vectors. Instead it has the mysterious sentence "The index ranges of the parameter Vectors are those of A". It is written as if Eigensystem had a way to change the constraints of Vectors, which is evidently false. It would seem that the RM should require that Values'Length, Vectors'Length(1) and Vectors'Length(2) all be equal to A'Length(1).
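A minimal sketch of the checks this would imply (raising Constraint_Error is my assumption; the RM currently names no exception for this case):

   with Ada.Numerics.Real_Arrays; use Ada.Numerics.Real_Arrays;
   procedure Check_Eigensystem_Lengths
     (A : Real_Matrix; Values : Real_Vector; Vectors : Real_Matrix) is
   begin
      --  The constraints suggested above; the choice of exception is
      --  an assumption, not RM wording.
      if Values'Length /= A'Length (1)
        or else Vectors'Length (1) /= A'Length (1)
        or else Vectors'Length (2) /= A'Length (1)
      then
         raise Constraint_Error;
      end if;
   end Check_Eigensystem_Lengths;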
3 - For some matrices, the QR iteration used to compute the eigenvalues will simply not
converge (no deflation will happen). While it is always possible to let it run forever, the typical practice in this case is to give up after some number of iterations. The RM makes no provision for raising an exception, yet the values computed for (some of) the eigenvalues may be completely bogus, so it would be better to raise an exception than to return garbage to the user.
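The usual defensive pattern looks like the following skeleton (the sweep limit and the choice of exception are illustrative assumptions; the point is precisely that the RM sanctions neither):

   procedure Give_Up_Eventually is
      --  Skeleton of the usual defence against a non-converging QR
      --  iteration.  Max_Sweeps and the choice of Constraint_Error are
      --  illustrative assumptions, not RM wording.
      Max_Sweeps : constant := 30;
      Converged  : Boolean  := False;
   begin
      for Sweep in 1 .. Max_Sweeps loop
         --  One QR sweep would go here; it would set Converged when a
         --  subdiagonal element deflates (computation elided).
         exit when Converged;
      end loop;
      if not Converged then
         raise Constraint_Error with "eigenvalue iteration did not converge";
      end if;
   end Give_Up_Eventually;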
4 - The index subtype for types Real_Vector and Real_Matrix is Integer. Presumably this was
intended to provide maximum flexibility in selecting the index range. It has, however, unpleasant consequences. Consider:
Identity: constant Real_Matrix := ((1.0, 0.0), (0.0, 1.0));
Anyone trying to evaluate Identity'First - 1 won't like the result: since the index subtype is Integer, the positional aggregate gives Identity a lower bound of Integer'First, so the subtraction overflows and raises Constraint_Error. Why would anyone do that? Perhaps to initialize a variable that will be used to iterate through the rows or columns of the matrix. This is sure to bite many users; using a slightly narrower index subtype would have been much wiser.
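A minimal sketch of the trap (the variable Row is hypothetical):

   with Ada.Numerics.Real_Arrays; use Ada.Numerics.Real_Arrays;
   procedure First_Trap is
      --  With index subtype Integer and no explicit bounds, the
      --  positional aggregate gives Identity a lower bound of
      --  Integer'First.
      Identity : constant Real_Matrix := ((1.0, 0.0), (0.0, 1.0));
      Row : Integer;
   begin
      Row := Identity'First - 1;  --  overflows: raises Constraint_Error
   end First_Trap;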
5 - G.3.2(75/2, 76/2) defines a function "abs" that returns the Hermitian L2-norm of a
vector. However, the specification of this function is given as:
function "abs" (Right : Complex_Vector) return Complex;
The norm of a vector is always a (nonnegative) real number, so it makes little sense to return a complex number here. This function should return a Real'Base. As a matter of fact it did in AI95-00418, but that AI was apparently incorrectly merged into the RM and the Amendment.
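For comparison, the profile as it read in AI95-00418, and as it presumably should read:

   function "abs" (Right : Complex_Vector) return Real'Base;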
6 - Section G.3.2 keeps talking about inner products, but never defines exactly what is meant
by this term. This is significant because in a complex vector space the natural inner product is the Hermitian one, in which the elements of the second vector are conjugated. It is unclear whether the function "*" conjugates the elements of Right. ISO/IEC 13813 explicitly specified that "no complex conjugation is performed". While the "*" operator defined by such a rule is not a true inner product, it is probably more appropriate in practice because it makes the conjugations explicit in the source: the user has to write X * Conjugate (Y), which mimics the mathematical notation, where conjugation is always explicit. At any rate, a clarification would be useful.
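To make the distinction concrete, here is a sketch using the nongeneric Ada.Numerics.Complex_Arrays (the vectors are arbitrary; whether "*" itself conjugates is exactly the open question):

   with Ada.Numerics.Complex_Types;  use Ada.Numerics.Complex_Types;
   with Ada.Numerics.Complex_Arrays; use Ada.Numerics.Complex_Arrays;
   procedure Inner_Products is
      X : constant Complex_Vector := ((1.0, 2.0), (3.0, -1.0));
      Y : constant Complex_Vector := ((0.0, 1.0), (2.0, 2.0));
      --  If "*" performs no conjugation, as ISO/IEC 13813 specified,
      --  the Hermitian inner product must be written explicitly:
      Bilinear  : constant Complex := X * Y;
      Hermitian : constant Complex := X * Conjugate (Y);
   begin
      null;  --  the two values above differ in general
   end Inner_Products;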
7 - The eigenvectors returned by Eigensystem in Generic_Real_Arrays
and Generic_Complex_Arrays are not well specified, and this has unfortunate effects. One problem is that the result of these subprograms is hard to test. A more serious problem is that different implementations may return different eigenvectors, and this may cause portability problems. Also, even in a single implementation, changes to the internal algorithms, or to the code generated for the Annex G generics, may cause the result of Eigensystem to change. This non-determinism is unpleasant.
Consider first the real case. Given a set of orthonormal eigenvectors, changing the signs of any subset of these vectors also results in a set of orthonormal eigenvectors (this is easily seen geometrically: it amounts to reversing the direction of some of the vectors of an orthonormal basis). It would be nice to be more specific.
Consider now the complex case. Things are nastier here because there is much more freedom: multiplying each vector by a complex number of modulus 1 leaves the set of eigenvectors orthonormal. Again, it would be nice to lift the ambiguity.
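A sketch of the sign convention suggested in the appendix below for the real case (make the first nonzero component of each eigenvector positive); treating the eigenvectors as the columns of Vectors is an assumption:

   with Ada.Numerics.Real_Arrays; use Ada.Numerics.Real_Arrays;
   procedure Normalize_Signs (Vectors : in out Real_Matrix) is
   begin
      for J in Vectors'Range (2) loop          --  one eigenvector per column
         for I in Vectors'Range (1) loop
            if Vectors (I, J) /= 0.0 then
               if Vectors (I, J) < 0.0 then    --  flip the whole column
                  for K in Vectors'Range (1) loop
                     Vectors (K, J) := -Vectors (K, J);
                  end loop;
               end if;
               exit;                           --  first nonzero found
            end if;
         end loop;
      end loop;
   end Normalize_Signs;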
!recommendation
!wording
!discussion
!ACATS test
!appendix

[Summary of private mail of July 2007; used by demand, er, permission.]

Pascal Leroy:

It appears that the eigenvectors returned by Eigensystem in
Generic_Real_Arrays and Generic_Complex_Arrays are not well specified, and
this has unfortunate effects.  One problem is that the result of these
subprograms is hard to test.  A more serious problem is that different
implementations may return different eigenvectors, and this may cause
portability problems.  Also, even in a single implementation, changes to
the internal algorithms, or to the code generated for the annex G
generics, may cause the result of Eigensystem to change.  This
non-determinism is unpleasant.

Consider first the real case.  If V1, V2, ..., Vn are eigenvectors that
are mutually orthonormal, then changing the signs of any subset of these
vectors also results in a set of eigenvectors that are mutually
orthonormal (this is easily seen geometrically: it amounts to changing
the direction of the vectors in an orthonormal basis).  It would have
been nice to be more specific, for instance by requiring the first
nonzero component of each vector to be positive.

Consider now the complex case.  Things are nastier here because you
have much more freedom: you can multiply each vector by a complex number
of modulus 1, and still have an orthonormal set.  One way to lift the
ambiguity would be to require that the first nonzero component of each
vector be a positive real.  But this would require a lot of complex
divisions, which would degrade the accuracy somewhat.  Not sure what to do
here.

John Barnes:

Another source of difficulty is if two or more eigenvalues are the same.
This results in a whole subspace of vectors, and any orthogonal set in that
subspace will do. I don't see how one could overcome that easily either.

Pascal Leroy:

True, but it's hard to be excited about this.  Even if the matrix has two
mathematically identical eigenvalues, it's hard to believe that the
numerical algorithm used will find the floating-point values of the
eigenvalues to be exactly identical, so the eigenvectors won't form a whole
subspace after all.  If, conversely, it turns out that the matrix didn't
have mathematically identical eigenvalues, but the numerical algorithm
produces two identical floating-point values, then the whole thing is
probably ill-conditioned, so you get what you get.

****************************************************************

From: John Barnes
Sent: Friday, August 3, 2007  2:17 AM

Perhaps you would feel happier about eigenvalues with an example of their
use in nature. They turn up in all sorts of physical situations. Perhaps the
simplest is the moment of inertia of a solid object such as the earth. Now
the earth is a slightly flattened sphere. As a consequence its moment of
inertia about the polar axis is more than the moment of inertia about an
axis going through the equator. Moreover, it will spin smoothly about any of
these axes. But if you attempt to spin it about an arbitrary axis, like an
axis through Madison, Paris or London, then it will wobble because it is
not symmetric about that axis.

The axes where there is no wobble are known as the principal axes. The
moments of inertia about them are the eigenvalues and the axes themselves
are the eigenvectors.

In the case of an oblate spheroid like the earth, one principal axis is the
polar axis and any two at right angles through the equator can be taken as
the other axes. That's because two of the eigenvalues are the same. A
general rigid body will have three distinct eigenvalues and no confusion
about the axes.

The eigenvalues are easy and there is never any dispute about them. The
problem is the eigenvectors because for one thing you can take them in
either direction. From North to South pole or vice versa for example. And if
two or more eigenvalues coincide then the axes can be chosen in lots of
different ways.

These eigensystems turn up in lots of physical situations. One other that
immediately springs to mind is in elasticity. Something like a block of wood
has a different elasticity along the grain and across the grain.

They also turn up in statistics when estimating several parameters.

The complex number stuff turns up in quantum mechanics.

Here endeth the first lesson.

---

Another example I should have mentioned is an aeroplane propeller. If it
has just two blades then the three moments of inertia (three eigenvalues)
are different. I believe this causes vibration when turning. A four-bladed
propeller has two eigenvalues the same and no vibration. A three-bladed
propeller is OK as well, perhaps surprisingly.

---

> I for one cannot think of a practical application of complex eigenvalues in
> finite-dimensional spaces.

Got it - I knew it was something about angular momentum.

The Pauli spin matrices are Hermitian. See for example Penrose, The Road to
Reality, pages 550-551. The matrices are (using good old Ada aggregates)

L1 = ((0, 1), (1, 0))
L2 = ((0, -i), (i, 0))
L3 = ((1, 0), (0, -1))

OK, so L1 and L3 are only real, but L2 is the proper thing. It is about the
simplest possible Hermitian matrix.

The eigenvalues are the possible spin values, which are in fact real
(half-integer, really).
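
For the record, L2 written as compilable Ada, as a sketch using the
nongeneric Annex G packages (for this matrix, Eigenvalues should return
+1.0 and -1.0):

   with Ada.Numerics.Real_Arrays;    use Ada.Numerics.Real_Arrays;
   with Ada.Numerics.Complex_Types;  use Ada.Numerics.Complex_Types;
   with Ada.Numerics.Complex_Arrays; use Ada.Numerics.Complex_Arrays;
   with Ada.Text_IO;
   procedure Pauli is
      --  The second Pauli matrix, ((0, -i), (i, 0)), using Ada's
      --  (Re, Im) record aggregates for Complex.
      L2 : constant Complex_Matrix :=
        (((0.0, 0.0), (0.0, -1.0)),
         ((0.0, 1.0), (0.0,  0.0)));
      --  The eigenvalues of a Hermitian matrix are real.
      Vals : constant Real_Vector := Eigenvalues (L2);
   begin
      for I in Vals'Range loop
         Ada.Text_IO.Put_Line (Float'Image (Vals (I)));  --  1.0 and -1.0
      end loop;
   end Pauli;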

****************************************************************
