HI-SPEED Java Classes and Examples

A few reminders for interested users who want to try out these examples.

a. Make sure hispeed.jar is on the classpath if you are running from the command line (see the command-line sketch after this list). If you are using NetBeans, make sure hispeed.jar is added to the "Libraries" through the "Add JAR/Folder" option; see Getting Started for more details.

b. Remember to import the relevant Java classes.

c. Add a call to the specific example method in the main method and click "Run File".
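For instance, assuming HISPEEDExamples.java and hispeed.jar sit in the current directory, a command-line compile-and-run might look like the sketch below (this is only an illustration of the standard javac/java classpath flags; on Windows use ";" instead of ":" as the classpath separator):

javac -cp hispeed.jar HISPEEDExamples.java
java -cp hispeed.jar:. HISPEEDExamples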

Here is a simple example of how to use the AdaptiveQuadrature class. The example is coded as a method called adaptiveQuadratureExample, which is called within the main method of the HISPEEDExamples class.

If you want to run other examples, simply add the example method to the HISPEEDExamples class and then, in the main method, either replace the statement "HISPEEDExamples.adaptiveQuadratureExample();" or add a new statement that calls the new example method.

--------------------------------------------------------------------------------------------------

import cgk.AdaptiveQuadrature;
import cgk.MAT;
import cgk.OneVariableFunction;

public class HISPEEDExamples {

    //Calling the adaptiveQuadratureExample in the main method
    public static void main(String[] args) {
        HISPEEDExamples.adaptiveQuadratureExample();
    }

    public static void adaptiveQuadratureExample(){
        OneVariableFunction ovf = new OneVariableFunction(){
            public double valueAt(double x) {
                return 1.0/Math.exp(0.5*x*x);
            }
        };
        double from = -100.0;
        double to = 2.0;
        double tol = 1.0e-15;
        int maxRecursion = 20;
        double[] results = AdaptiveQuadrature.integrate(ovf, from, to, tol, maxRecursion);
        System.out.println(MAT.toString(results));
    }//end adaptiveQuadratureExample

}//end HISPEEDExamples

----------------------------------------------------------------------------------------------------

EXAMPLES

1. AdaptiveQuadrature class

This is a Java class implementing an adaptive integration method. The method can integrate a function of a single real variable. If either limit of integration is negative or positive infinity, you will have to perform a change of variables so that the new variable has finite limits of integration. In some cases, such as the one shown below, one of the limits of integration can be made finite without appreciably affecting the value of the integral, because the contribution of the integrand beyond the interval of interest is too small to affect the overall value.
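For an infinite range, one possible change of variables is x = t/(1 - t^2), which maps t in (-1, 1) onto the whole real line with dx = (1 + t^2)/(1 - t^2)^2 dt. The sketch below is an illustrative addition (not one of the original HI-SPEED examples) that uses this substitution together with the same imports as the example above to integrate exp(-x^2/2) over (-infinity, infinity); the first entry of the result should be close to sqrt(2*pi), i.e., about 2.5066.

public static void infiniteLimitsExample(){
    //Integrand after the change of variables x = t/(1 - t^2).
    OneVariableFunction transformed = new OneVariableFunction(){
        public double valueAt(double t) {
            double u = 1.0 - t*t;
            if (u <= 0.0) return 0.0;              //guard the endpoints t = -1 and t = 1
            double x = t/u;
            double jacobian = (1.0 + t*t)/(u*u);   //dx/dt
            return Math.exp(-0.5*x*x)*jacobian;
        }
    };
    double[] results = AdaptiveQuadrature.integrate(transformed, -1.0, 1.0, 1.0e-15, 20);
    System.out.println(MAT.toString(results)); //first entry should be close to Math.sqrt(2.0*Math.PI)
}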

Implementation details can be found on page 92 in G. E. Forsythe, M. A. Malcolm, and C. B. Moler, Computer methods for mathematical computations. Englewood Cliffs: Prentice-Hall, 1977.

You need to import the following classes to run this example.

import cgk.AdaptiveQuadrature;
import cgk.MAT;
import cgk.OneVariableFunction;

//-----------------------------------------------------------------------------------------------------

public static void adaptiveQuadratureExample(){
    //Step 1: define a function using the OneVariableFunction interface.
    OneVariableFunction ovf = new OneVariableFunction(){
        public double valueAt(double x) {
            return 1.0/Math.exp(0.5*x*x); //the (unnormalized) standard normal density, exp(-x^2/2)
        }
    };
    double from = -100.0;  //lower limit
    double to = 2.0;       //upper limit
    double tol = 1.0e-15;  //the desired tolerance of the error estimate
    int maxRecursion = 20; //the maximum number of recursions desired
    double[] results = AdaptiveQuadrature.integrate(ovf, from, to, tol, maxRecursion); //Return the value of the integral and the error estimate
    System.out.println(MAT.toString(results)); //Print the results.
}//end adaptiveQuadratureExample

//-----------------------------------------------------------------------------------------------------

The printout should be: { 2.4496021506381083, 4.059396448142163E-17};

2. AnalyticallyExactSpiralScheme class

This is a Java class implementing the analytically exact spiral scheme for distributing points uniformly on the unit sphere. This is a deterministic method: given the desired number of points to be distributed on the unit sphere, it computes the correct spacing between adjacent spirals of constant slope in such a manner that the spacing is also the same as the spacing between any two consecutive points on the spirals. Stand-alone software with a GUI has also been created to ease the use of the method and can be downloaded from here.

Implementation details can be found in Koay CG. Analytically exact spiral scheme for generating uniformly distributed points on the unit sphere. Journal of Computational Science 2011; 2: 88-91.

You need the following classes:

import cgk.MAT;
import cgk.pointset.AnalyticallyExactSpiralScheme;

//----------------------------------------------------------------------------------------------------

public static void analyticallyExactSpiralSchemeExample(){
    int n = 6; //Number of points on the unit sphere
    double[][] pts = AnalyticallyExactSpiralScheme.getPointSetInCartesianCoordinates(n);
    System.out.println("pts = "+MAT.toString(pts));
}//end analyticallyExactSpiralSchemeExample

//----------------------------------------------------------------------------------------------------

The printout should be:

pts = {{ -0.24365145382711215, 0.41876217641920244, 0.8747983817135165},
{ -0.4202358705401975, -0.735860024821704, 0.5309537051200637},
{ 0.8597402891737448, -0.4789005788825843, 0.17748484644434145},
{ 0.5547931038648712, 0.8128368478277858, -0.17748484644434134},
{ -0.7049634403063146, 0.47022836032162263, -0.5309537051200637},
{ -0.03064572125384735, -0.4835169398482787, -0.8747983817135165}};
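As a quick check (not part of the original example), every returned point should lie on the unit sphere. A plain-Java verification using the pts array from the example above:

for (double[] p : pts) {
    double norm = Math.sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]); //Euclidean norm of each point
    System.out.println("norm = " + norm); //each value should be very close to 1.0
}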

3. AntipodalIterativeOptimization class

This is a Java class implementing an iterative optimization technique that uses multiple strategies to distribute points uniformly on the unit sphere under the antipodal symmetry constraint. It starts with the Nelder-Mead simplex method, using the deterministic point set as one of the simplices. The minimizer found by the Nelder-Mead simplex method is then used in the BFGS method to further reduce the modified potential energy of the point set. Internally, it also performs multiple reinitializations while keeping the best simplex as one of the simplices.

This class was created to generate the best possible point sets for comparison purposes against those generated from the deterministic method proposed in Koay CG. A simple scheme for generating nearly uniform distribution of antipodally symmetric points on the unit sphere. Journal of Computational Science 2011; 2: 377-381.

Classes that need to be imported:

import cgk.MAT;
import cgk.pointset.AntipodalIterativeOptimization;

//-----------------------------------------------------------------------------------------

public static void antipodalIterativeOptimizationExample(){
    int N = 10; //Number of points
    AntipodalIterativeOptimization iop = new AntipodalIterativeOptimization(N);
    //Windows path convention
    String DIR_PATH = "C:\\Users\\cgk\\Desktop\\antipodalpointsets\\";
    //Linux path convention
    //String DIR_PATH = "/home/cgk/antipodalpointsets/";
    iop.writeOutTo(DIR_PATH); //write out the point set to a text file
    iop.setNumberOfGridPoints(25);
    iop.setNumberOfSimplexSeeds(5);
    iop.setNumberOfSimplexReinitialization(5);
    iop.run();
    //solution vector is in spherical coordinates.
    double[] solutionVector = iop.getSolution();
    System.out.println("minimum potential energy= "+ iop.valueAt(solutionVector));
    //The solution vector may be used to investigate the gradient vector and the norm of the gradient vector.
    double[] grad = iop.gradientValueAt(solutionVector);
    System.out.println("grad vector = "+MAT.toString(grad));
    double[][] pts = iop.getOptimalPointSetInCartesianCoordinate();
    System.out.println("pts ="+MAT.toString(pts));
}//end antipodalIterativeOptimizationExample

//-------------------------------------------------------------------------------------------------------

Internally, the initial point sets are based on stochastic initialization, so the final point set may have different coordinates from one run to the next, but the minimum potential energy should be very close across runs. The minimum potential energy for this example should be close to 150.90747122105196.

Each component of the gradient vector of the potential energy function should be small, say around or below 1.0e-9 for this example.
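A minimal sketch (plain Java, using the grad array already computed in the example above) for checking this convergence criterion:

double maxAbsComponent = 0.0;
for (double g : grad) {
    maxAbsComponent = Math.max(maxAbsComponent, Math.abs(g)); //largest gradient component in absolute value
}
System.out.println("max |gradient component| = " + maxAbsComponent); //expect roughly 1.0e-9 or smaller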

This method also writes out a text file containing

(a) the number of points on the first line,

(b) the minimum potential energy of the point set on the second line, and

(c) the point set in Cartesian coordinates.

The name of the generated text file follows this convention: "AntipodalPointSetOfNPointsViaIterativeOptimization.txt", where the letter "N" takes on the integer value of the assigned number of points (for N = 10, the file is named "AntipodalPointSetOf10PointsViaIterativeOptimization.txt").

4. AntipodallySymmetricPointSet class

This Java class implements the deterministic method for generating nearly uniform points on the unit sphere. The method was proposed in Koay CG. A simple scheme for generating nearly uniform distribution of antipodally symmetric points on the unit sphere. Journal of Computational Science 2011; 2: 377-381.

Stand-alone software GUI and the source code can be found here.

Classes that need to be imported:

import cgk.MAT;
import cgk.pointset.AntipodallySymmetricPointSet;

//-----------------------------------------------------------------------------------------

public static void antipodallySymmetricPointSetExample(){
    int N = 10;
    double[][] pts = AntipodallySymmetricPointSet.getPointSetInCartesianCoordinates(N);
    System.out.println("pts ="+MAT.toString(pts));
}//end antipodallySymmetricPointSetExample

//-----------------------------------------------------------------------------------------

5. AntipodallySymmetricSphericalSpline class

GCV-Antipodally Symmetric Spherical Spline.

This class implements the GCV-Spherical Spline with truncated Laplace series of spherical harmonics of only even orders: f(theta,phi) = Sum[Sum[beta^{m}_{l} Y^{m}_{l},{m,-l,l}],{l,0,l_max,2}]. l_max must be an even number.
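In standard mathematical notation, the truncated even-order expansion above reads

f(\theta,\phi) = \sum_{l = 0, 2, 4, \ldots}^{l_{max}} \sum_{m=-l}^{l} \beta_{l}^{m}\, Y_{l}^{m}(\theta,\phi).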

The criterion for selecting the optimal smoothing parameter lambda is based on generalized cross-validation (GCV) [2-3]. This technique has been used in many applications, e.g., regularization of fiber orientation distributions [4] and the signal-transformational framework for breaking the noise floor in MRI [5]. This class also contains methods for those interested in Orientation Distribution Function (ODF) reconstruction in analytical Q-ball imaging [6-8], an important extension of the original numerical Q-ball approach proposed by Tuch et al. [9]. The implementation here is similar to [8], but the smoothing technique is based on GCV rather than the L-curve.
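For reference, the GCV criterion of Refs. [2-3] has the generic form

GCV(\lambda) = n\, \| (I - S_{\lambda})\, y \|^{2} \,/\, [\, \mathrm{tr}(I - S_{\lambda}) \,]^{2},

where S_lambda is the smoothing ("hat") matrix of the penalized least-squares fit and y is the measurement vector; lambda is chosen to minimize GCV(lambda). This is the textbook form of the criterion; the exact matrices used internally by the class may differ.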

The details of this method are outlined in Appendices B and C of Ref. [5].

Refs:

[1] Ruppert D, Wand MP, Carroll RJ. Semiparametric Regression. New York; Cambridge, 2003.

[2] Craven P, Wahba G. Smoothing noisy data with spline functions: estimating the correct degree of smoothing by the method of generalized cross-validation. Numer. Math. 31: 377-403, 1979.

[3] Golub GH, Heath M, Wahba G. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics,21(2): 215-223, 1979.

[4] Sakaie KE, Lowe MJ. An objective method for regularization of fiber orientation distributions derived from diffusion-weighted MRI. NeuroImage 2007; 34: 169-176.

[5] Koay CG, Özarslan E and Basser PJ. A signal transformational framework for breaking the noise floor and its applications in MRI. J Magn Reson 2009; 197: 108-119.

[6] Anderson AW. Measurement of fiber orientation distributions using high angular resolution diffusion imaging. Magn Reson Med. 2005; 54: 1194-1206.

[7] Hess CP, Murkherjee P, Han ET, Xu D, Vigneron DB. Q-ball reconstruction of multiple fiber orientations using the spherical harmonic basis. Magn Reson Med. 2006; 56: 104-117.

[8] Descoteaux M, Angelino E, Fitzgibbons S, Deriche R. Regularized, Fast and Robust Analytical Q-ball Imaging. Magn Reson Med. 2007; 58: 497-510.

[9] Tuch DS. Q-ball imaging. Magn Reson Med 2004; 52: 1358-1372.

Classes that need to be imported:

import cgk.MAT;
import cgk.simulations.Tools; //this class is specific to the example used to illustrate the AntipodallySymmetricSphericalSpline class.
import cgk.splines.AntipodallySymmetricSphericalSpline;
import cgk.pointset.UniformPointSet; //used in the example below; the package is assumed to match that of the other point-set classes

//-------------------------------------------------------------

public static void antipodallySymmetricSphericalSplineExample(){
    int lmax = 4;
    int npts = 30;
    double[][] SC = new double[npts][2];
    //Create the UniformPointSet object to retrieve a uniform point set on the unit sphere
    UniformPointSet ups = new UniformPointSet(npts);
    double[][] dir = ups.getAntipodalPointSet();
    double fac = 1.0e-4;
    double[] trueGamma = {Math.log(1000.0), 9.475*fac, 6.694*fac, 4.829*fac, 1.123*fac, -0.507*fac, -1.63*fac};
    double[] bvalues = { 1000.0};
    int NC = 1; //number of receiver coil channels.
    double snr = 30;
    Tools t = new Tools(dir, bvalues, trueGamma, NC);
    System.out.println("noiseless normalized signals = "+ MAT.toString( MAT.multiply( t.noiselessSI(), 1.0/1000.0) ) );
    double[] normalizedNoisySI = MAT.multiply( t.noisySI(snr), 1.0/1000.0);
    System.out.println("noisy normalized signals = "+ MAT.toString(normalizedNoisySI) );
    //get the point set in Cartesian coordinates and convert it to spherical coordinates.
    SC = MAT.C2S( dir );
    AntipodallySymmetricSphericalSpline asss = new AntipodallySymmetricSphericalSpline(SC, lmax);
    asss.evaluate(normalizedNoisySI);
    System.out.println("Estimated normalized signals = "+ MAT.toString( asss.getEstimatedY() ) );
    System.out.println("Minimum GCV = "+ asss.getMinimumGCV() );
    System.out.println("Optimal lambda = "+ asss.getOptimalLambda() );
}//end antipodallySymmetricSphericalSplineExample

//-------------------------------------------------------------

Essentially, you need only provide the spherical coordinates (denoted SC above) at which the measurements are made, together with the maximum degree of the spherical harmonics, lmax. Internally, most of the matrix computations are carried out within the constructor, with the following exception: each time the method "evaluate" is called, it takes in a measurement vector and performs the GCV optimization to find the optimal lambda. The number of measurements (or the number of spherical coordinates) should be greater than or equal to (lmax+1)(lmax+2)/2; for lmax = 4 this lower bound is 5*6/2 = 15, so the 30 measurements used above are more than sufficient.

In this particular example, the measurement vector was obtained from noisy diffusion-weighted signals, which were in turn generated from a known diffusion tensor at a signal-to-noise ratio of 30.

6. BalanceComplexMatrix

Please read the next example (BalanceMatrix) first; the example used for this class is very closely related to it. In addition to cgk.MAT, this example uses the Complex and BalanceComplexMatrix classes (presumably cgk.Complex and cgk.BalanceComplexMatrix, following the package convention of the other examples).

//----------------------------------------------------------------------------------

public static void balanceComplexMatrixExample(){
    //The example below is adapted from Cleve Moler's blog on computing matrix exponential
    //at URL: http://blogs.mathworks.com/cleve/2012/07/23/a-balancing-act-for-the-matrix-exponential/
    //Please first read the example on balancing real matrices, which is the next Java class example.
    double a = 2.0e10;
    double b = 4.0e8/6.0;
    double c = 200.0/3.0;
    double d = 3.0;
    double e = 1.0e-8;
    //C = I A where A is the matrix discussed in Cleve's blog. I=sqrt(-1) is the imaginary number.
    Complex[][] C = new Complex[][] {{Complex.ZERO, new Complex(0,e), Complex.ZERO},
                                     { new Complex(0,-(a+b)), new Complex(0,-d), new Complex(0,a)},
                                     { new Complex(0,c), Complex.ZERO, new Complex(0,-c)}};
    System.out.println("C = "+MAT.toString(C));
    BalanceComplexMatrix bm = new BalanceComplexMatrix(C);
    double[] Tdiag = bm.getDiagonalScalingMatrix();
    Complex[][] B = bm.getBalancedMatrix();
    double[][] T = MAT.diagonalMatrix(Tdiag);
    double[][] invT = MAT.diagonalMatrix(new double[]{1.0/Tdiag[0],1.0/Tdiag[1],1.0/Tdiag[2]});
    System.out.println("B = "+MAT.toString(B));
    System.out.println("T = "+MAT.toString(T));
    Complex[][] expm = MAT.dot(T, MAT.dot(MAT.matrixExp(B),invT) );
    System.out.println("exp(C) = "+ MAT.toString(expm)); //by balancing
    System.out.println("exp(C) = "+ MAT.toString(MAT.matrixExp(C))); //direct call
}//end

//------------------------------------------------------------------------------------------------------------

The printout should be as follows:

C = {{ 0.0, 1.0E-8i, 0.0},
{ -2.0066666666666668E10i, -3.0i, 2.0E10i},
{ 66.66666666666667i, 0.0, -66.66666666666667i}};

B = {{ 0.0, 21.47483648i, 0.0},
{ -9.344269831975302i, -3.0i, 18.62645149230957i},
{ 33.333333333333336i, 0.0, -66.66666666666667i}};

T = {{ 9.5367431640625E-7, 0.0, 0.0},
{ 0.0, 2048.0, 0.0},
{ 0.0, 0.0, 1.9073486328125E-6}};

exp(C) = {{ 1.0022454013286795+0.017378170891141242i, -6.2768717997502306E-12-3.4404402908818045E-10i, -0.008696126438511254-0.13395067322678655i},
{ -4.537858684524357E7-2.026228031416285E8i, 1.0041284628686054+0.12059137961759575i, 4.542043265724182E7+2.0491643000221676E8i},
{ 0.14270531575229484+0.5491040934472686i, -2.8987088128372325E-11-4.4650224408928917E-10i, 0.8508149720497435-0.6661230980270036i}};

exp(C) = {{ 1.0022454013286795+0.017378170891141242i, -6.2768717997502306E-12-3.4404402908818045E-10i, -0.008696126438511254-0.13395067322678655i},
{ -4.537858684524357E7-2.026228031416285E8i, 1.0041284628686054+0.12059137961759575i, 4.542043265724182E7+2.0491643000221676E8i},
{ 0.14270531575229484+0.5491040934472686i, -2.8987088128372325E-11-4.4650224408928917E-10i, 0.8508149720497435-0.6661230980270036i}};

7. BalanceMatrix

The relevant reference is B. N. Parlett and C. Reinsch. Balancing a matrix for calculation of eigenvalues and eigenvectors. Numer. Math., 13:293-304, 1969.

Balancing a matrix is useful in the computation of the matrix exponential, as discussed in Cleve's Corner. This example uses cgk.MAT together with the BalanceMatrix class (presumably cgk.BalanceMatrix, following the package convention of the other examples).
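Balancing is a diagonal similarity transformation, so the exponential of the original matrix can be recovered from the exponential of the balanced matrix:

B = T^{-1} A T  implies  exp(A) = T exp(B) T^{-1},

which is exactly how expm is computed in the example below (with invT playing the role of T^{-1}).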

//-----------------------------------------------------------------------------------

public static void balanceMatrixExample(){
    //The example below is taken from Cleve Moler's blog on computing matrix exponential
    //at URL: http://blogs.mathworks.com/cleve/2012/07/23/a-balancing-act-for-the-matrix-exponential/
    double a = 2.0e10;
    double b = 4.0e8/6.0;
    double c = 200.0/3.0;
    double d = 3.0;
    double e = 1.0e-8;
    double[][] A = {{0.0, e, 0.0},{ -(a+b), -d, a},{ c, 0.0, -c}};
    BalanceMatrix bm = new BalanceMatrix(A);
    double[] Tdiag = bm.getDiagonalScalingMatrix();
    double[][] B = bm.getBalancedMatrix();
    double[][] T = MAT.diagonalMatrix(Tdiag);
    double[][] invT = MAT.diagonalMatrix(new double[]{1.0/Tdiag[0],1.0/Tdiag[1],1.0/Tdiag[2]});
    System.out.println("B = "+MAT.toString(B));
    System.out.println("T = "+MAT.toString(T));
    double[][] expm = MAT.dot(MAT.dot(T,MAT.matrixExp(B)),invT);
    System.out.println("exp(A) by balancing = "+ MAT.toString(expm)); //by balancing
    System.out.println("exp(A) by direct call = "+ MAT.toString(MAT.matrixExp(A))); //direct call
}//end

//-----------------------------------------------------------------------------------

The printed results should be the following:

B = {{ 0.0, 21.47483648, 0.0},
{ -9.344269831975302, -3.0, 18.62645149230957},
{ 33.333333333333336, 0.0, -66.66666666666667}};

T = {{ 9.5367431640625E-7, 0.0, 0.0},
{ 0.0, 2048.0, 0.0},
{ 0.0, 0.0, 1.9073486328125E-6}};

exp(A) by balancing = {{ 0.44684946828316985, 1.5404415738395117E-9, 0.4628114535587709},
{ -5743067.779479937, -0.015283003868683545, -4526542.712784489},
{ 0.44772297784948933, 1.542704845195904E-9, 0.46348064883764734}};

exp(A) by direct call = {{ 0.44684946828316985, 1.5404415738395117E-9, 0.4628114535587709},
{ -5743067.779479937, -0.015283003868683545, -4526542.712784489},
{ 0.44772297784948933, 1.542704845195904E-9, 0.46348064883764734}};

8. BFGS

9. BHFalseDiscoveryRate

10. Binary

11. Complex

12. ComplexFloat

13. ComplexHouseholderQR

14. ComplexHouseholderVector

15. DataObject

16. DataObjectFloat

17. DistributionalMapping

18. DTIProcessing

19. DTRepresentation

20. FastGeneralSimplex

21. FibonacciSearch

22. FSL2HISPEED

23. GeneralCNLS4DTIFast

24. GeneralNLS4DTIFast

25. GeneralNLS4DTISuperFast

26. GeneralWLLS4DTI

27. GlobalPIESNO

28. GLTFile

29. HouseholderQR

30. HouseholderVector

31. JCholesky

32. JCholeskyComposition

33. JComplexCholesky

34. JComplexCyclicJacobi

35. JComplexCyclicJacobiSVD

36. JComplexModifiedCholesky

37. JConjugateGradient

38. JCyclicJacobi

39. JCyclicJacobiSVD

40. JFourier

41. JLU

42. JModifiedCholesky

43. JPolarDecomposition

44. JQR

45. LSI

LSI is a class for solving the general least squares problem with inequality constraints, i.e., minimizing ||Ax - b|| subject to Gx ≥ h (in the example below, the matrix A and the vector b are named E and f). The implementation is based on that of Chapter 23 of "Solving Least Squares Problems" by Lawson and Hanson.

The following example is also based on Chapter 23, Section 7, of the same reference.

import cgk.LSI;
import cgk.MAT;

//-----------------------------------------------------------------

public static void LSIExample(){
    double[][] E = {{0.25, 1},{0.50, 1},{0.5, 1},{0.8, 1}};
    double[][] G = {{1,0},{0,1},{-1,-1}};
    double[] h = {0,0,-1};
    LSI alsi = new LSI(E, G, h);
    double[] f = {0.5,0.6,0.7,1.2};
    alsi.solve(f);
    System.out.println("solution vector (xhat) = " + MAT.toString( alsi.getSolution() ));
    System.out.println("residual vector (rhat) = "+MAT.toString(MAT.subtract(f, MAT.dot(E, alsi.getSolution()))));
}

//---------------------------------------------------------------------

The printouts should be:

solution vector (xhat) = { 0.6213151927437639, 0.37868480725623604};

residual vector (rhat) = { -0.03401360544217702, -0.08934240362811796, 0.010657596371882017, 0.3242630385487528};
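As a quick sanity check (not part of the original example), you can verify that the solution satisfies the inequality constraints Gx ≥ h. Here the constraints read x1 ≥ 0, x2 ≥ 0, and x1 + x2 ≤ 1, and the last one is active since the two components of xhat sum to 1.0. A sketch, appended to the LSIExample method above:

double[] xhat = alsi.getSolution();
double[] Gxhat = MAT.dot(G, xhat); //each entry should be >= the corresponding entry of h = {0, 0, -1}
System.out.println("G*xhat = " + MAT.toString(Gxhat));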

46. MeanDTRepresentation

47. MedianPIESNO

48. MultiVariableFunction

49. MultiVariableGradientFunction

50. Nifti

51. NiftiDataObject

52. NiftiHeader

53. NonAntipodalIterativeOptimization

54. OneDimensionalSHORESignalTransformationalFramework

55. OneDimensionalSignalTransformationalFramework

56. OneVariableFunction

57. OneVariableGradientFunction

58. OptimalOrderingWithAntipodalSymmetry

59. OptimalOrderingWithNonAntipodalSymmetry

60. OptimalPIESNO

61. OptimalViewOrderingWithAntipodalSymmetry

62. OptimalViewOrderingWithNonAntipodalSymmetry

63. PenalizedSpline

64. Permutations

65. PowerfulListing

66. Rational

67. Set

68. SheppLogan2D

69. SheppLogan3D

70. Shore1DSpline

71. SignalFixedPointFinder

72. SNRFixedPointFinder

73. Sort

74. SpecialFunction

75. SphericalSpline

76. SVD

77. TensorialAveraging

78. Tools

79. TwoDimensionalSphericalSignalTransformationalFramework

80. UniformPointSet

81. WilcoxonMannWhitneyTest

This Java class implements the Wilcoxon-Mann-Whitney test without tied ranks. The p-value can be computed exactly or approximately via the normal approximation. The exact p-value computation is based on a modified version of Harding's algorithm, Ref. [1], and was presented in Ref. [2]. Below is an example taken from page 295 of Lothar Sachs' book Applied Statistics: A Handbook of Techniques, 2nd Ed., Springer, NY, 1984.

Cited References:

[1]. Harding, E.F., 1984. An efficient, minimal-storage procedure for calculating the Mann-Whitney U, Generalized U and similar distributions. Journal of the Royal Statistical Society. Series C (Applied Statistics) 33, 1-6.

[2]. Koay CG, Yeh P-H, Ollinger JM, İrfanoğlu MO, Pierpaoli C, Basser PJ, Oakes TR, Riedy G. Tract Orientation and Angular Dispersion Deviation Indicator (TOADDI): A framework for single-subject analysis in diffusion tensor imaging. NeuroImage 2015. DOI: 10.1016/j.neuroimage.2015.11.046

You need the following classes:

import cgk.WilcoxonMannWhitneyTest;
import cgk.MAT; //needed only if you uncomment the p-value table statements in the example below

//------------------------------------------------------------------------

public static void wilcoxonMannWhitneyTestExample(){
    double[] A = {7,14,22,36,40,48,49,52};
    double[] B = {3,5,6,10,17,18,20,39};
    boolean useNormalApproximation = false;
    WilcoxonMannWhitneyTest wmw = new WilcoxonMannWhitneyTest(A.length, B.length, useNormalApproximation);
    double[] results = wmw.runTest(A, B);
    System.out.println("p-value by normal approximation? : "+wmw.normalApproximationEmployed() );
    System.out.println("Umin = "+results[0]);
    System.out.println("p-value = "+results[1]);
    //Using static methods to compute the table of exact p-values or approximate p-values.
    //Uncomment the two statements below if you would like to see the exact and approximate p-values.
    //System.out.println("Asymptotic p-value = "+MAT.toString( WilcoxonMannWhitneyTest.asymptoticPValueTable(A.length, B.length) ) );
    //System.out.println("Exact p-value table ="+MAT.toString(WilcoxonMannWhitneyTest.exactPValueList(A.length, B.length)));
}

//------------------------------------------------------------------------

If "useNormalApproximation" is set to "true", the printout should be

p-value by normal approximation? : true

Umin = 11.0

p-value = 0.027423154381354675

If "useNormalApproximation" is set to "false", the printout should be

p-value by normal approximation? : false

Umin = 11.0

p-value = 0.028127428127428127

Therefore, the p-value by normal approximation is less than the exact p-value.

82. WilcoxonMannWhitneyTestWithTies

This Java class implements the Wilcoxon-Mann-Whitney test with tied ranks. The p-value can be computed exactly or approximately via the normal approximation. Below is an example taken from page 296 of Lothar Sachs' book Applied Statistics: A Handbook of Techniques, 2nd Ed., Springer, NY, 1984. The example shown in Sachs' book has several minor errors; for example, R1, R2, U1, and U2 should be 84.5, 51.5, 15.5, and 48.5, respectively.
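For reference, these corrected values are consistent with the standard Mann-Whitney relations for n1 = n2 = 8 (this check is not from Sachs' book):

U1 = n1*n2 + n1(n1+1)/2 - R1 = 64 + 36 - 84.5 = 15.5
U2 = n1*n2 + n2(n2+1)/2 - R2 = 64 + 36 - 51.5 = 48.5

so that U1 + U2 = n1*n2 = 64 and R1 + R2 = N(N+1)/2 = 136 with N = 16.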

You need the following class:

import cgk.WilcoxonMannWhitneyTestWithTies;

//---------------------------------------------------------------------

public static void wilcoxonMannWhitneyTestWithTiesExample(){
    double[] A = {5,5,8,10,13,13,13,15};
    double[] B = {3,3,4,5,5,8,9,16};
    boolean useNormalApproximation = true;
    WilcoxonMannWhitneyTestWithTies wmw = new WilcoxonMannWhitneyTestWithTies(A, B, useNormalApproximation);
    System.out.println( "p-value by normal approximation? : "+wmw.normalApproximationEmployed() );
    System.out.println( "Umin = "+wmw.getUmin());
    System.out.println( "p-value = "+wmw.getPValue());
}

//---------------------------------------------------------------------

If "useNormalApproximation" is set to "true", the printout should be

p-value by normal approximation? : true

Umin = 15.5

p-value = 0.07949907143493151

If "useNormalApproximation" is set to "false", the printout should be

p-value by normal approximation? : false

Umin = 15.5

p-value = 0.08873348873348873

Therefore, the p-value by normal approximation is less than the exact p-value.