Lin's description of the stacking operator: Lin (2008), p. 40; Lin (2010), p. 8.
The definitions of the stacking operation in Lin (2008) and He (2009) are exactly the same, and the terms used are the same. Lin (2008) avoids the word "stack", but it does appear in Lin (2010), which indicates that the author wanted to bypass the term "stack" in the earlier paper.
The author describes several stacking methods. When a method is given a name, its principle and its criterion should be stated precisely, just as the method of finding extreme values comes with an extreme-value principle and a discriminant criterion. Such a criterion is rigorous: whatever satisfies the condition is the expected result, whatever violates it is not, and this must be proved. Arguing in the opposite direction, "I obtained the desired result, therefore my method is correct," is not valid. The author uses the phrase "The basic idea of our method is quite simple" to avoid giving a proof. What difficulty stood in the way? For example, the author's condition "D1 and D2 are generally not LHD" is not a workable condition. The fact is that stacking two orthogonal matrices always yields an orthogonal matrix, but in most cases the result is not an OLHD; even when both matrices are OLHDs, the stacked result may fail to be an OLHD, and a matrix that is not an LHD certainly cannot be an OLHD (see the sketch below). If stacking two matrices that are not LHDs produces an OLHD, that can only be an accident, not an inevitable law; the examples are too numerous to list.
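A minimal sketch of this point, in Python/NumPy (my own illustration, not code from the papers under review): vertically stacking two designs whose columns are centred and mutually orthogonal always preserves that orthogonality, yet the stacked matrix is generally not a Latin hypercube, because its levels repeat instead of forming 2n distinct equally spaced values.

```python
import numpy as np

# Two 4-run OLHDs with centred levels (D2 is D1 with its rows reversed).
D1 = np.array([[-1.5,  0.5],
               [-0.5, -1.5],
               [ 0.5,  1.5],
               [ 1.5, -0.5]])
D2 = D1[::-1]

S = np.vstack([D1, D2])                      # the stacked 8 x 2 matrix

# Column orthogonality survives stacking: S^T S = D1^T D1 + D2^T D2 is diagonal.
G = S.T @ S
print(np.allclose(G, np.diag(np.diag(G))))   # True

# But S is not a Latin hypercube: an 8-run LHD column would be a permutation
# of {-3.5, -2.5, ..., 3.5}, whereas here each 4-run level appears twice.
print(np.sort(S[:, 0]))                      # [-1.5 -1.5 -0.5 -0.5  0.5  0.5  1.5  1.5]
```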
With the help of operation (2.1), a larger OLHD can be constructed from an existing smaller OLHD. That much is a genuine creation.
Theorem 1 in Lin (2010) is the same as Theorem 2.1 in Lin (2008), and it states the conditions of the method. It is a theorem of global scope, yet the authors do not prove it. No basis for it can be found in existing tensor-product theory; the simplest proof would borrow the stacking operation on designs and the stacking principle of orthogonal designs. Instead the authors take a detour, and in the wrong direction.
An example (Example 2) is used to demonstrate the application of the theorem. The example takes the matrix A to be (1,1)^T and C to be (1/2,-1/2)^T; under the two conditions above, such A and C obviously do not satisfy the requirements. To make them fit, the author forcibly defines the "plus ones" matrix and individual vectors to be orthogonal matrices, so that (1,1)^T is a Hadamard matrix, (1/2,-1/2)^T is an OLHD, (x1,-x1)^T is an orthogonal matrix, and so on. Such definitions violate basic mathematical common sense; they are illegitimate. Yet the correctness of the operation itself could be proved from the stackability of orthogonal matrices, and the argument is very simple. Why did the author not give it? They dared not.
If B is an n×p OLHD and H is an n×p Hadamard sub-matrix, then (1,1)^T ⊗ B is a stacking of two copies of B, and (1/2,-1/2)^T ⊗ H is a stacking of H/2 and -H/2. Orthogonality is a special case of zero correlation. Each of the two terms in the author's formula (2.1) is orthogonal, and the two terms are orthogonal to each other because (1,1) is orthogonal to (1/2,-1/2); multiplying a term by a real number changes neither fact, so their sum is again orthogonal. This is a typical stacking, very easy to understand and free of any obstacle in mathematical theory.
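As a sanity check of this argument, here is a small Python/NumPy sketch (my own illustration; the particular B, H, and the scale factor are assumed for the example and are not the matrices of Lin's Example 2). The cross products vanish identically, since ((1,1)^T ⊗ B)^T ((1/2,-1/2)^T ⊗ H) = ((1,1)(1/2,-1/2)^T) · B^T H = 0, so the sum of the two terms keeps mutually orthogonal columns for any real weight.

```python
import numpy as np

# B: a 4 x 2 OLHD with centred levels; H: a 4 x 2 Hadamard sub-matrix
# (two mutually orthogonal +/-1 columns).  Illustrative choices only.
B = np.array([[-1.5,  0.5],
              [-0.5, -1.5],
              [ 0.5,  1.5],
              [ 1.5, -0.5]])
H = np.array([[ 1.0,  1.0],
              [-1.0,  1.0],
              [ 1.0, -1.0],
              [-1.0, -1.0]])

T1 = np.kron(np.array([[1.0], [1.0]]), B)     # stacks B on top of B
T2 = np.kron(np.array([[0.5], [-0.5]]), H)    # stacks H/2 on top of -H/2

# Cross products vanish because (1,1) is orthogonal to (1/2,-1/2).
print(np.allclose(T1.T @ T2, 0))              # True

gamma = 4.0                                   # an arbitrary real scale factor
D = T1 + gamma * T2                           # a two-term sum of this kind
G = D.T @ D
print(np.allclose(G, np.diag(np.diag(G))))    # True: columns remain orthogonal
```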
It is clear that when n = 3 or n = 4k+2, the number of orthogonal columns is zero; in all other cases, the minimum number of orthogonal columns is 2. Unless the number of runs is very small, nobody knows what the maximum number of orthogonal columns is. The author's definition of the maximum number of orthogonal columns is wrong both in concept and in logic.
Under that definition there is at least one orthogonal column in any matrix; then every column is orthogonal; of course there is no matrix without orthogonal columns and no matrix that is not orthogonal. This is an absolutely unacceptable theory.
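For completeness, the run-size fact stated two paragraphs above, no orthogonal columns when n = 3 or n = 4k+2 and at least one orthogonal pair otherwise, can be checked by brute force for small n. The sketch below is my own illustration; since applying the same row permutation to both columns preserves their inner product, the first column may be fixed as the sorted centred levels.

```python
from itertools import permutations

def has_orthogonal_pair(n):
    # Doubled centred levels (integers), which does not affect orthogonality:
    # e.g. for n = 4 these are -3, -1, 1, 3.
    levels = [2 * i - (n - 1) for i in range(n)]
    # Fix the first column as the sorted levels and search over the second.
    return any(sum(x * y for x, y in zip(levels, p)) == 0
               for p in permutations(levels))

for n in range(2, 10):
    print(n, has_orthogonal_pair(n))
# Prints False exactly for n = 2, 3, 6 (that is, n = 3 or n = 4k+2 in this range)
# and True for n = 4, 5, 7, 8, 9.
```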