Based on tensor-product theory, C. D. Lin defines a matrix operation (Lin (2008), page 16; Lin (2010), page 3) as an extension method for constructing orthogonal hypercube designs, carrying the prominent and hard-to-discard label (2.1). I believe this was the author's original research plan for an extension construction method before the summer of 2006, and it would have been a sound concept to pursue. Unfortunately, in Lin (2010) it serves only as a cover for theft.
For convenience of description, we prefix this number with an L.
L = A⊗B + γ C⊗D ............................(L2.1)
Its conditions of application are the following Lemma 1, Lemma 2, and Theorem 1 (only the parts of these propositions relevant to the discussion are extracted here; see the original text for details).
Lemma 1. Let γ = n₂. Then design L in (L2.1) is a Latin hypercube if:
(i) both B and C are Latin hypercubes and ......
Lemma 2. Design L in (L2.1) is orthogonal if:
(i) A, B, C and D are all orthogonal, and ......
Theorem 1. Let γ = n₂. Then design L in (L2.1) is an orthogonal Latin hypercube if:
(i) A and D are orthogonal matrices of ±1;
(ii) B and C are orthogonal Latin hypercubes; ......
This is an important highlight of Lin's research work. However, there is no example of a successful application in Lin (2010). Example 2 in Lin (2010) aims to obtain L = OLH(32,12) from B = OLH(16,12); the hidden formula is
L = (1,1)ᵀ⊗B + γ(1/2,−1/2)ᵀ⊗D .........(6.1)
where D consists of the first 12 columns of the Hadamard matrix of order 16. Matching against formula (L2.1): A = (1,1)ᵀ is neither a Hadamard matrix nor an orthogonal matrix, and C = (1/2, −1/2)ᵀ is not an OLHD; the conditions of Lemma 1, Lemma 2, and Theorem 1 are not satisfied, so Theorem 1 cannot be applied.
By the properties of the tensor product, when two matrices X and Y are orthogonal, X ⊗ Y is an orthogonal matrix. If X or Y is not orthogonal, there is no guarantee that X ⊗ Y is orthogonal. Tensor-product theory therefore cannot guarantee that L in (6.1) is an OLHD. Example 2 thus cannot serve as an example of Theorem 1; it belongs to a different theoretical system, namely stacking. The reasonable way to achieve this construction is the stacking operation on matrices. He (2009) defines the stacking operation and states and proves the stackable property of zero-correlation matrices:
Figure 6.1 The definition of the stacking operation and the proof of the stackable property of zero-correlation matrices in He (2009) (translated)
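The tensor-product property mentioned above is easy to check numerically. The sketch below is a minimal illustration with NumPy; the helper `cols_orthogonal` and the toy matrices are my own, not taken from either paper. "Orthogonal" is used as in this literature: every pair of distinct columns has zero inner product.

```python
import numpy as np

def cols_orthogonal(M, tol=1e-9):
    """True if every pair of distinct columns of M has zero inner product."""
    G = M.T @ M
    return bool(np.all(np.abs(G - np.diag(np.diag(G))) < tol))

H2 = np.array([[1.0, 1.0],
               [1.0, -1.0]])        # a 2x2 Hadamard matrix (orthogonal columns)

# If X and Y are column-orthogonal, so is X (x) Y: the Gram matrix factors
# as (X^T X) (x) (Y^T Y), a Kronecker product of diagonal matrices.
assert cols_orthogonal(np.kron(H2, H2))

# If one factor is not orthogonal, the guarantee disappears.
J = np.ones((2, 2))                 # all-ones matrix, not orthogonal
assert not cols_orthogonal(np.kron(J, H2))
```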
cf_C ≠ cf_A + cf_B can happen if A and B have different numbers of rows. For an orthogonal matrix X, the inner product of any two columns is ⟨xᵢ, xⱼ⟩ = 0, and cf = 0. As long as the two orthogonal matrices A and B have the same number of columns, the sum of the inner products of any pair of corresponding columns is always 0:
⟨aᵢ, aⱼ⟩ + ⟨bᵢ, bⱼ⟩ ≡ 0 ............(6.2)
Therefore, the two orthogonal matrices are allowed to have different numbers of rows. Orthogonality is a special case of zero correlation, and equation (6.2) is a direct consequence of the stackable property of zero-correlation matrices.
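Identity (6.2) can be verified directly. In the sketch below (my own toy matrices, assuming the column-orthogonality reading above), two column-orthogonal matrices with different numbers of rows are stacked, and the off-diagonal entry of the Gram matrix of the result is zero, since it equals ⟨aᵢ, aⱼ⟩ + ⟨bᵢ, bⱼ⟩ = 0 + 0.

```python
import numpy as np

# Two column-orthogonal matrices with the same number of columns
# but different numbers of rows.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])                     # 2 x 2
B = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [-1.0, 1.0],
              [-1.0, -1.0]])                    # 4 x 2

Z = np.vstack([A, B])                           # the stacking of A and B

# (6.2): <a_i, a_j> + <b_i, b_j> = 0, so every off-diagonal entry of the
# Gram matrix of the stacked matrix is 0.
G = Z.T @ Z
assert abs(G[0, 1]) < 1e-9
```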
Use S(X, Y) to denote the stacking operation on X and Y, so that Z = S(X, Y). The following properties of the stacking operation can easily be proved with elementary algebra; this article omits the proofs.
1. Self-stacking. If the matrix X is orthogonal, then Z = S(X, X) is orthogonal.
2. Order. If X and Y are orthogonal, S(Y, X) is also orthogonal; S(Y, X) is isomorphic to S(X, Y) but not identical to it.
3. If X and Y are orthogonal matrices with the same dimensions, then
S(X,X)+ S(Y,Y) = S(Y,Y)+ S(X,X) ............(6.3)
Both sides are orthogonal and identical.
4. Multiplication by a constant. If X and Y are two orthogonal matrices with the same number of columns and α is a real number, then
αS(X,Y)= S(αX,αY) ............(6.4)
It is an orthogonal matrix.
5. Composite. If X and Y are two orthogonal matrices with the same dimensions, and α and β are two real numbers, then the result Z is an orthogonal matrix:
Z=αS(X,X)+βS(Y,-Y)............(6.5)
6. Recursion. The result of one stacking can be used as an input to another stacking.
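Properties 1, 2, 4, and 5 can be checked numerically. The following sketch uses my own toy matrices and a `cols_orthogonal` helper (zero inner product between every pair of distinct columns); it is an illustration, not a proof.

```python
import numpy as np

def S(X, Y):
    """Stacking operation: place Y below X (same number of columns)."""
    return np.vstack([X, Y])

def cols_orthogonal(M, tol=1e-9):
    G = M.T @ M
    return bool(np.all(np.abs(G - np.diag(np.diag(G))) < tol))

H = np.array([[1.0, 1.0], [1.0, -1.0]])   # a 2x2 Hadamard matrix
K = np.array([[1.0, 2.0], [2.0, -1.0]])   # column-orthogonal, same dimensions

assert cols_orthogonal(S(H, H))                         # 1. self-stacking
assert cols_orthogonal(S(K, H))                         # 2. order swapped, still orthogonal
assert cols_orthogonal(3.0 * S(H, K))                   # 4. multiplication by a constant
assert cols_orthogonal(0.5 * S(H, H) + 2.0 * S(K, -K))  # 5. composite, as in (6.5)
```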
We now assume that A is an n × m (m > 1) hypercube whose level spacing is δ. In equation (6.5), letting X = A, α = 1/δ, and β = 1, with Y a Hadamard sub-matrix of the same dimensions as X, the result is L in equation (6.1). Letting Y = A, β = 1/δ, and α = 1, with X a Hadamard sub-matrix of the same dimensions as Y, the result is U in Proposition 1 of Lin (2010). If A is orthogonal, then both L and U are orthogonal; this rests on the stackable property of orthogonal matrices, not on Theorem 1 and tensor-product theory.
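The point that (6.1) is a stacking in disguise follows from the identity (1,1)ᵀ⊗B = S(B, B) and γ(1/2,−1/2)ᵀ⊗D = (γ/2) S(D, −D). The sketch below checks this on a scaled-down analogue of Example 2: a toy OLH(4,2) and two Hadamard columns of my own invention (not the actual OLH(16,12) and order-16 Hadamard sub-matrix of Lin (2010)), with γ = n₂ = 4.

```python
import numpy as np

B = np.array([[ 1.5,  0.5],
              [ 0.5, -1.5],
              [-0.5,  1.5],
              [-1.5, -0.5]])   # a toy OLH(4,2): orthogonal columns, levels +-0.5, +-1.5
D = np.array([[ 1.0,  1.0],
              [ 1.0, -1.0],
              [-1.0,  1.0],
              [-1.0, -1.0]])   # two columns of a Hadamard matrix of order 4
gamma = 4.0                    # gamma = n2, the run size of B

# (6.1) written with Kronecker products...
L_kron = np.kron(np.array([[1.0], [1.0]]), B) \
       + gamma * np.kron(np.array([[0.5], [-0.5]]), D)

# ...is identical to a stacking: S(B + (gamma/2) D, B - (gamma/2) D).
L_stack = np.vstack([B + (gamma / 2) * D,
                     B - (gamma / 2) * D])
assert np.allclose(L_kron, L_stack)

# The columns of the result are orthogonal, and each column takes the
# eight levels +-0.5, ..., +-3.5, i.e., the result is an OLH(8,2).
G = L_stack.T @ L_stack
assert abs(G[0, 1]) < 1e-9
assert np.allclose(np.sort(L_stack[:, 0]),
                   [-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5])
```

Note that neither half, B + 2D nor B − 2D, is itself a Latin hypercube; only their stack is. The orthogonality of the stack comes from the composite property (6.5), not from tensor-product theory.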
To legally achieve the goal of Example 2, the theoretical system of Lemma 1, Lemma 2, and Theorem 1 must be abandoned, and a new theoretical system established through the matrix stacking operation and the stackable property of orthogonal matrices to ensure the validity of (6.1). Instead, the author created a theory that violates mathematical and logical common sense in order to bypass these discussions.
In the doctoral thesis Lin (2008), the author defines by fiat that the "plus ones" matrix and a single vector are orthogonal matrices: "technically, orthogonal designs must have at least two factors; if a design B or C has only one factor, it is orthogonal by our definition." Lin (2010) retains the theory that a single vector is an orthogonal matrix by defining non-existence as existence, that is, 0 = 1. This kills three birds with one stone: it whitewashes the existence theorem, whitewashes Example 2, which violates mathematical common sense, and whitewashes the stolen stacking method.
A single vector can be regarded as a matrix, but the only single vector that can be regarded as an orthogonal matrix is the zero vector 0 = (0, 0, ..., 0). Apart from all-zero matrices and the matrix [1] with the single element 1, no constant matrix is an orthogonal matrix, and the vector 1 cannot be regarded as a sub-matrix of a Hadamard matrix. Using (1,1)ᵀ to pose as a Hadamard matrix and an orthogonal matrix, (1/2, −1/2)ᵀ to pose as an OLHD, and (x, −x)ᵀ to pose as an orthogonal matrix is blatant fakery.
This is a very strange mathematical method. Lin (2010) then derives Theorem 3 and all subsequent results, which apply the method of Example 2 rather than Theorem 1.
Theorem 3. Suppose that an OLH(n,m) is available where n is a multiple of 4 such that a Hadamard matrix of order n exists. Then we have that:
(i) the following orthogonal Latin hypercubes, an OLH(2n,m), an OLH(4n,2m), an OLH(8n,4m) and an OLH(16n,8m), can all be constructed;
(ii) all the following orthogonal Latin hypercubes, an OLH(2n+1,m), an OLH(4n+1,2m), an OLH(8n+1,4m) and an OLH(16n+1,8m), can also be constructed.
The author claims that the theory that a single vector is an orthogonal matrix "is now strengthened", that "The basic idea of our method is quite simple", and that "Theorem 3 is a very powerful result. By repeated application of Theorem 3, one can obtain many infinite series of orthogonal Latin hypercubes."
This simple and very powerful method is stacking; Theorem 1 plays no role in it.
The implementation of part (i) of Theorem 3 uses the algorithm of Example 2, which is stacking, not a tensor product. The so-called repeated application of Theorem 3 is a recursive call to the algorithm of Example 2. Example 2 of Lin (2010) is mathematically illogical, so any call to it is mathematically illegitimate. Results that have not been proved by a general theory cannot be cited as evidence for another proposition, and instances are not proofs of a proposition. Theorem 3 therefore has no legitimacy. Lin claims a proof, but that proof repeatedly uses results that were never legitimately proven, so its legitimacy does not exist.
Subsequently, Lin defines the stacking operation that we discussed earlier. However, there is no corresponding discussion: it is never proved that the column inner products of a stacking result are 0, i.e., that equation (6.2) of this paper holds. This shows that she learned the stacking method only after defining operation (L2.1), which was her previously established route. If she had known the stacking operation earlier, why would Example 2 overcome its obstacles by falsely defining a vector to be an orthogonal matrix? The stacking method was stolen from He (2009).
Regarding this stacking operation, Lin has a note: "Note that Da and Db themselves are not necessarily Latin hypercubes." Whether the result of stacking two orthogonal matrices is orthogonal has not been proved, let alone whether it is an OLHD. Even if the two stacked matrices are both OLHDs, the stacking result may not be an OLHD. Stacking two matrices that are not LHDs into an OLHD can only be accidental; it cannot be a universal law. Such a proposition must be proved, but this is not done.
Lin's context indicates that Lin once again confirmed that the 12-column orthogonal design constructed by Steinberg et al. (2006) is not an LHD; it becomes one only after its levels are divided by 2.
Against this background, Lin named the two stacking methods. They have nothing to do with Theorem 1, and there is no theoretical proof guaranteeing their correctness. Lin admits that part (ii) of Theorem 3 was constructed using the two stacking methods and has nothing to do with Theorem 1. As noted earlier, the whole of Theorem 3 is not mathematically valid. The related Proposition 3, which Tang (2011) upgraded to Theorem 4, is a result of applying Theorem 3 and should be a corollary of it. Since Theorem 3 is invalid, Theorem 4 is also invalid.
Figure 6.2 Proposition 3 in Lin (2010), which Tang (2011) upgraded to Theorem 4. Neither is related to Theorem 1.
All results in Lin (2010) were obtained by the three stacking methods, and their correctness is guaranteed by the stackability of orthogonal matrices. The core Theorem 1 of Lin (2010) is never used anywhere in the paper. The full text of Lin (2010) is the result of fraud.
It is said that no one has reported an error. That no one reported an error does not mean the result is correct. That the result is not wrong while the intermediate process is incorrect is exactly what proves the result came from cheating.