Each layer is the sum of multiple blocks.
Each block is a Hadamard product (element-wise multiply) of the input vector with a weight vector, followed by a Hadamard transform, followed by another Hadamard product with a self-switched weight vector.
The density is the number of such blocks in each layer. Since each block holds two weight vectors of length n (2n parameters), a layer of density d uses 2dn parameters, so depending on the density you choose, a layer can use fewer or more than the n squared weight parameters of a conventional dense neural network layer (fewer whenever d < n/2).
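As a minimal sketch of the idea in NumPy: the vector length is assumed to be a power of two, and "self-switched" is interpreted here as choosing between two weights per element based on the sign of the value being multiplied (one plausible reading, not necessarily the original scheme). The names `fwht`, `block`, and `layer` are illustrative.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform, O(n log n); n must be a power of 2.
    Normalized by sqrt(n) so the transform is orthonormal."""
    x = x.copy()
    n = x.shape[0]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(n)

def block(x, w_in, w_pos, w_neg):
    """One block: elementwise multiply by w_in, Hadamard transform,
    then a sign-dependent ('self-switched') elementwise multiply:
    w_pos where the transformed value is >= 0, w_neg otherwise
    (this switching rule is an assumption)."""
    t = fwht(x * w_in)
    return np.where(t >= 0.0, w_pos, w_neg) * t

def layer(x, block_params):
    """A layer is the sum of its blocks; density = len(block_params)."""
    return sum(block(x, *p) for p in block_params)
```

The switching step is what makes the block nonlinear; without it, each block would be a linear map and summing blocks would collapse to a single linear layer.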