Samples for JSpline+

Contents

1  Scope
2  Unified Data Access
3  Matrix Algebra
4  Solving of SLAE
    4.1  Comparison of Square Solvers
    4.2  Comparison of Common Solvers
    4.3  Common Solvers' Speed Benchmark
    4.4  Inversion of the Hilbert Matrix
    4.5  Test for General Green Solver
    4.6  Test for Positive Definite Symmetric Banded Matrix Solver
5  Splines



1  Scope

This document describes the samples for the JSpline+ classes, distributed as samples.zip. The main aims of these samples are:

The directory tree of the archive mirrors the structure of the JSpline+ packages: samples concerning matrices are stored in the matrix subdirectory, samples for solving linear systems in the matrix/solve subdirectory, and so on. To simplify their use, all sample classes are declared in the default package. So, to run a sample, set the CLASSPATH variable in the following manner:

set CLASSPATH=.;JSpline+'s path\jspline.jar



2  Unified Data Access

[To be written]



3  Matrix Algebra

[To be written]



4  Solving of SLAE

Directory: matrix/solve

Contents:  AccuracyOfCommonSolvers
           AccuracyOfSquareSolvers
           SolversSpeedBenchmark
           TestHilbertMatrix
           TestOfChoBandSolver
           TestOfGGreenSolver



4.1  Comparison of Square Solvers

The AccuracyOfSquareSolvers class compares three solvers of the RealSquareSolver type for accuracy of the solution. A test n×n matrix B is generated in the form B = A^T A, where A is a matrix with random entries from the interval (-1,1). The matrix B is symmetric and positive definite, so all square solvers may be used to solve the SLAE.

The exact solution x is also generated randomly with the same distribution, and the right-hand side y = Bx is calculated. The system Bz = y is then solved three times with the different square solvers, and the relative error ||z-x||/||x|| is calculated in several vector norms (the max-norm, the Euclidean norm, and the sum-norm implemented in the RealVector class).
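The test setup can be sketched in plain Java arrays as follows; the class and method names here are illustrative and not part of the JSpline+ API:

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;

/** Sketch of the accuracy-test setup of Section 4.1 (illustrative names). */
public class SquareSolverTestSetup {

    /** B = A^T A for a random n-by-n matrix A with entries in (-1, 1). */
    static double[][] randomSpd(int n, Random rng) {
        double[][] a = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = 2.0 * rng.nextDouble() - 1.0;
        double[][] b = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double s = 0.0;
                for (int k = 0; k < n; k++) s += a[k][i] * a[k][j]; // (A^T A)_ij
                b[i][j] = s;
            }
        return b;
    }

    static double normMax(double[] v) {
        double m = 0;
        for (double x : v) m = Math.max(m, Math.abs(x));
        return m;
    }

    static double normSum(double[] v) {
        double s = 0;
        for (double x : v) s += Math.abs(x);
        return s;
    }

    static double normEuclidean(double[] v) {
        double s = 0;
        for (double x : v) s += x * x;
        return Math.sqrt(s);
    }

    /** Relative error ||z - x|| / ||x|| in the given norm. */
    static double relError(double[] z, double[] x, ToDoubleFunction<double[]> norm) {
        double[] d = new double[x.length];
        for (int i = 0; i < x.length; i++) d[i] = z[i] - x[i];
        return norm.applyAsDouble(d) / norm.applyAsDouble(x);
    }

    public static void main(String[] args) {
        double[][] b = randomSpd(4, new Random(1));
        // B is symmetric by construction:
        System.out.println(b[0][3] == b[3][0]);
    }
}
```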

The table below shows the Euclidean relative errors of the compared algorithms for different matrix sizes (the behaviour of the relative errors in other norms is analogous). Note that it makes no sense to compare errors across different matrices, because their condition numbers vary over a wide range; only the errors shown within the same row may be compared.

n    CholeskySolver  GaussSolver  CrautSolver
100  4.163E-13       4.641E-13    3.912E-13
100  1.459E-11       1.362E-11    2.333E-11
200  0.670E-12       0.638E-12    1.085E-12
200  0.214E-11       1.252E-11    1.391E-11
500  2.760E-08       3.395E-08    5.009E-08
500  1.717E-11       2.292E-11    2.659E-11

Theoretically, the Cholesky method is the most accurate for positive definite symmetric matrices, and the tests shown confirm this. The pivoting used in the Craut method is unnecessary in this case and actually decreases the accuracy.

The Cholesky method is about twice as fast as the Gauss and Craut methods, but it may be used only for positive definite symmetric matrices. If a matrix does not satisfy these restrictions but is non-singular, we recommend the Craut method.
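For reference, a minimal textbook Cholesky factorization and solve looks like this; it is a sketch only, not the JSpline+ CholeskySolver, whose API may differ:

```java
/** Minimal textbook Cholesky: B = L L^T, then two triangular solves. */
public class CholeskySketch {

    /** Returns lower-triangular L with B = L L^T; B must be SPD. */
    static double[][] factor(double[][] b) {
        int n = b.length;
        double[][] l = new double[n][n];
        for (int j = 0; j < n; j++) {
            double d = b[j][j];
            for (int k = 0; k < j; k++) d -= l[j][k] * l[j][k];
            l[j][j] = Math.sqrt(d);
            for (int i = j + 1; i < n; i++) {
                double s = b[i][j];
                for (int k = 0; k < j; k++) s -= l[i][k] * l[j][k];
                l[i][j] = s / l[j][j];
            }
        }
        return l;
    }

    /** Solves B z = y via forward substitution (L w = y) then backward (L^T z = w). */
    static double[] solve(double[][] l, double[] y) {
        int n = l.length;
        double[] z = new double[n];
        for (int i = 0; i < n; i++) {
            double s = y[i];
            for (int k = 0; k < i; k++) s -= l[i][k] * z[k];
            z[i] = s / l[i][i];
        }
        for (int i = n - 1; i >= 0; i--) {
            double s = z[i];
            for (int k = i + 1; k < n; k++) s -= l[k][i] * z[k];
            z[i] = s / l[i][i];
        }
        return z;
    }

    public static void main(String[] args) {
        double[][] b = {{4, 2}, {2, 3}};        // a small SPD test matrix
        double[] z = solve(factor(b), new double[]{8, 7});
        System.out.printf("%.3f %.3f%n", z[0], z[1]);  // 1.250 1.500
    }
}
```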



4.2  Comparison of Common Solvers

The AccuracyOfCommonSolvers class compares three solvers of the RealCommonSolver type for accuracy of the solution. A test n×n matrix A is generated with random entries from the interval (-1,1).

The exact solution x is also generated randomly with the same distribution, and the right-hand sides y = Ax and w = A^T x are calculated. These systems are solved with the different common solvers, and the relative error ||z-x||/||x|| of each computed solution z is calculated in the Euclidean norm.

The table below shows the Euclidean relative errors of the compared algorithms for different matrix sizes for Ax = y. Note that it makes no sense to compare errors across different matrices, because their condition numbers vary over a wide range; only the errors shown within the same row may be compared.

n    RotationSolver  ReflectionSolver  EliminationSolver
10   3.275E-15       0.693E-15         2.447E-15
10   1.104E-15       1.964E-15         3.162E-15
100  1.006E-14       1.972E-14         3.856E-14
100  0.976E-14       1.364E-14         1.158E-14
200  1.036E-13       0.992E-13         1.630E-13
200  3.387E-14       2.261E-14         1.808E-14
500  2.116E-13       1.689E-13         3.741E-13
500  3.754E-14       1.831E-14         4.548E-14

As you can see, the accuracy of the EliminationSolver is usually the worst (especially for large matrices), but it is the fastest of the three. The RotationSolver and ReflectionSolver show close results, and the choice between them should also be based on their speed (see Section 4.3 for more details).



4.3  Common Solvers' Speed Benchmark

The SolversSpeedBenchmark class compares the speed of three solvers of the RealCommonSolver type. It measures both the factorization time and the solution calculation time. To obtain more accurate benchmarks, the factorization is repeated many times and the average factorization time is calculated; the same scheme is used for the solution calculation time.
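The averaging scheme can be sketched as follows; the names here are illustrative, not the actual SolversSpeedBenchmark code:

```java
/** Sketch of the repeat-and-average timing scheme described above. */
public class AverageTimer {

    /** Average wall-clock seconds per call of task over `repeats` runs. */
    static double averageSeconds(Runnable task, int repeats) {
        long start = System.nanoTime();
        for (int r = 0; r < repeats; r++) task.run();
        return (System.nanoTime() - start) / (repeats * 1e9);
    }

    public static void main(String[] args) {
        // Stand-in workload for a factorization; repeat count 40 as in the table.
        double t = averageSeconds(() -> {
            double s = 0;
            for (int i = 1; i <= 100_000; i++) s += 1.0 / i;
        }, 40);
        System.out.printf("avg = %.6f s%n", t);
    }
}
```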

We compared the speed of the solvers on a PC with an AMD-K6(TM)-2/266 MHz processor and 128 MB of memory under Windows NT 4. Three Sun VMs were used for the comparison:

  1. Classic VM with JIT (build JDK-1.2.2-W, native threads, symcjit);
  2. Java HotSpot(TM) Server VM (1.0.1, mixed mode, build g);
  3. Java HotSpot(TM) Client VM (build 1.3.0rc1-S, mixed mode).

The table below shows the benchmarks for the different VMs (referred to by the numbers 1, 2, 3), matrix sizes, repeat counts (the number of repeated calculations used for time averaging), and solvers. Every benchmark is shown as a sum of the average factorization time and the average solution time. The time is measured in seconds.

VM  Matrix Size  Repeat Count  RotationSolver  ReflectionSolver  EliminationSolver
1   100          40            0.081 + 0.005   0.091 + 0.002     0.044 + 0.001
2   100          40            0.167 + 0.009   0.182 + 0.004     0.106 + 0.002
3   100          40            0.129 + 0.006   0.096 + 0.002     0.046 + 0.001
1   200          20            0.614 + 0.022   0.745 + 0.008     0.374 + 0.005
2   200          20            1.229 + 0.039   1.382 + 0.015     0.787 + 0.011
3   200          20            1.021 + 0.026   0.754 + 0.008     0.341 + 0.004
1   500          10            9.763 + 0.151   22.210 + 0.076    8.801 + 0.030
2   500          10            18.840 + 0.257  30.932 + 0.122    15.174 + 0.069
3   500          10            15.692 + 0.176  21.913 + 0.077    8.856 + 0.030

As you can see, the JDK 1.2.2 JIT compiler has the best performance, while the HotSpot Server VM performs worst. The JDK 1.3 HotSpot Client VM is good enough: its performance matches the JIT for all solvers except the RotationSolver.

The comparison of the algorithms shows that the EliminationSolver is the fastest both at factorization and at solving the SLAE. It is unexpected that the RotationSolver beats the ReflectionSolver in speed, because the theoretical time ratio for the RotationSolver, ReflectionSolver, and EliminationSolver is 3:2:1. Only the JDK 1.3 HotSpot VM shows this ratio, and only for moderately sized matrices.



4.4  Inversion of the Hilbert Matrix

The TestHilbertMatrix class tests the accuracy of the CholeskySolver on the Hilbert matrix of the 7th order. The matrix is scaled by the factor 360360 so that its entries are calculated without rounding errors. The resulting formula for the matrix entries is A_ij = 360360/(i+j+1). The exact solution vector x consists of ones (we follow Wilkinson and Reinsch in this test).
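The scaled matrix can be built directly; since 360360 = lcm(1, ..., 13), every entry 360360/(i+j+1) with 0 ≤ i, j ≤ 6 is an exact integer. A plain-Java sketch, independent of the JSpline+ API:

```java
/** Builds the scaled Hilbert matrix A_ij = scale/(i+j+1), 0-based indices. */
public class ScaledHilbert {

    static double[][] build(int n, double scale) {
        double[][] a = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = scale / (i + j + 1);
        return a;
    }

    public static void main(String[] args) {
        double[][] a = build(7, 360360.0);
        // All denominators 1..13 divide 360360, so the entries carry no rounding error:
        System.out.println(a[0][0] + " " + a[6][6]);  // 360360.0 27720.0
    }
}
```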

The condition number of this matrix is about 0.5e9, so the expected calculation error is near 1e-7. We compare the solve and inversion methods with their iteratively refined versions. The table below shows the results.

        Ordinary   Refined
Solve   7.0023E-9  4.9240E-9
Invert  1.275E-14  0.757E-14
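The "Refined" column is produced by iterative refinement: compute the residual r = y - Az, solve Ad = r, and update z. One refinement step can be sketched as follows; the tiny elimination solver and all names here are illustrative, not the JSpline+ implementation:

```java
/** Sketch of one iterative-refinement step for a linear system A z = y. */
public class RefinementSketch {

    /** Naive Gaussian elimination without pivoting (sketch only). */
    static double[] solveNaive(double[][] a0, double[] y0) {
        int n = y0.length;
        double[][] a = new double[n][n + 1];
        for (int i = 0; i < n; i++) {
            System.arraycopy(a0[i], 0, a[i], 0, n);
            a[i][n] = y0[i];
        }
        for (int p = 0; p < n; p++)
            for (int i = p + 1; i < n; i++) {
                double f = a[i][p] / a[p][p];
                for (int j = p; j <= n; j++) a[i][j] -= f * a[p][j];
            }
        double[] z = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double s = a[i][n];
            for (int j = i + 1; j < n; j++) s -= a[i][j] * z[j];
            z[i] = s / a[i][i];
        }
        return z;
    }

    /** Returns z + d, where A d = y - A z (one refinement step). */
    static double[] refineOnce(double[][] a, double[] y, double[] z) {
        int n = y.length;
        double[] r = new double[n];
        for (int i = 0; i < n; i++) {
            double s = y[i];
            for (int j = 0; j < n; j++) s -= a[i][j] * z[j];
            r[i] = s;                           // residual r = y - A z
        }
        double[] d = solveNaive(a, r);
        double[] out = new double[n];
        for (int i = 0; i < n; i++) out[i] = z[i] + d[i];
        return out;
    }
}
```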



4.5  Test for General Green Solver

The TestOfGGreenSolver class compares the accuracy of the GeneralGreenSolver with the accuracy of other solvers. The special block linear system

    ( C    B )  ( x )     ( b )
    ( B^T  O )  ( y )  =  ( c )

is solved. Here C is an m×m matrix and B is an m×k matrix. The solution vectors x and y have m and k components respectively, and the right-hand side vectors b and c also consist of m and k components. In the test, the matrices and the solution are generated randomly with the uniform distribution on the interval (-1,1).
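The block matrix can be assembled explicitly with plain arrays; this sketch is independent of the JSpline+ classes:

```java
/** Assembles the (m+k)-by-(m+k) block matrix  [ C  B ; B^T  O ]. */
public class BlockSystem {

    /** C is m-by-m, B is m-by-k. */
    static double[][] assemble(double[][] c, double[][] b) {
        int m = c.length, k = b[0].length;
        double[][] g = new double[m + k][m + k];
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < m; j++) g[i][j] = c[i][j];      // upper-left: C
            for (int j = 0; j < k; j++) g[i][m + j] = b[i][j];  // upper-right: B
        }
        for (int i = 0; i < k; i++)
            for (int j = 0; j < m; j++) g[m + i][j] = b[j][i];  // lower-left: B^T
        return g;                           // lower-right block stays O (zeros)
    }
}
```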

The table below shows the Euclidean relative errors of the compared algorithms for different m and k. Note that it makes no sense to compare errors across different matrices, because their condition numbers vary over a wide range; only the errors shown within the same row may be compared.

m    k    GeneralGreenSolver  ReflectionSolver  CrautSolver
100  50   0.429E-12           1.217E-12         0.760E-12
100  50   1.056E-13           0.191E-13         0.223E-13
200  100  0.421E-13           0.355E-13         2.433E-13
200  100  3.908E-14           1.359E-14         8.905E-14
500  200  0.177E-12           0.594E-12         1.503E-12
500  200  1.079E-13           2.671E-13         4.043E-13

The GeneralGreenSolver shows the best accuracy for large matrices. Its accuracy could be improved further by replacing its internal Craut-type solver with a more accurate one based on rotations or reflections.

Theoretically, the speed of the GeneralGreenSolver is close to that of the CrautSolver.



4.6  Test for Positive Definite Symmetric Banded Matrix Solver

The TestOfChoBandSolver class solves SLAEs with two special positive definite symmetric banded matrices. The first matrix is tridiagonal: it is the Toeplitz matrix with entries a_i,i = 2 and a_i,i+1 = a_i,i-1 = -1. The second matrix is the square of the first one; its entries are a_i,i = 6, a_i,i+1 = a_i,i-1 = -4, a_i,i+2 = a_i,i-2 = 1, except a_0,0 = a_n-1,n-1 = 5, where n is the matrix size. The exact solution vector x consists of ones. The table below shows the Euclidean relative errors for different matrix dimensions.

n      A          A^2
100    8.792E-15  5.457E-11
1000   3.363E-13  1.687E-07
10000  1.368E-11  4.312E-04
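The two test matrices can be reproduced and their entries checked with a short dense-array sketch (illustrative only; the actual test uses the banded solver on the band storage):

```java
/** Builds the tridiagonal Toeplitz matrix A (2 on the diagonal, -1 on the
 *  first off-diagonals) and its square, to check the entries quoted above. */
public class BandedTestMatrices {

    static double[][] tridiag(int n) {
        double[][] a = new double[n][n];
        for (int i = 0; i < n; i++) {
            a[i][i] = 2;
            if (i > 0) a[i][i - 1] = -1;
            if (i < n - 1) a[i][i + 1] = -1;
        }
        return a;
    }

    static double[][] square(double[][] a) {
        int n = a.length;
        double[][] b = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++) b[i][j] += a[i][k] * a[k][j];
        return b;
    }

    public static void main(String[] args) {
        double[][] b = square(tridiag(6));
        // interior row pattern: ... 1 -4 6 -4 1 ...; corners b[0][0] = b[n-1][n-1] = 5
        System.out.println(b[2][2] + " " + b[2][3] + " " + b[2][4] + " " + b[0][0]);
        // prints: 6.0 -4.0 1.0 5.0
    }
}
```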

The condition number is about (2n/π)^2 for the first matrix and (2n/π)^4 for the second one. Hence the relative error must increase roughly 100 times and 10,000 times respectively when the matrix size increases tenfold.
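A quick arithmetic check of this prediction, assuming the quoted condition number estimates:

```java
/** With cond(A) ~ (2n/pi)^2 and cond(A^2) ~ (2n/pi)^4, a tenfold increase
 *  of n multiplies the expected error by 10^2 = 100 and 10^4 = 10000. */
public class ConditionGrowth {

    static double growth(int n1, int n2, int power) {
        return Math.pow(2.0 * n2 / Math.PI, power) / Math.pow(2.0 * n1 / Math.PI, power);
    }

    public static void main(String[] args) {
        System.out.printf("%.0f %.0f%n", growth(100, 1000, 2), growth(100, 1000, 4));
        // prints: 100 10000
    }
}
```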



5  Splines

[To be written]


File translated from TEX by TTH, version 2.25.
On 08 Feb 2001, 16:34.