Research Highlight

Learning nonlinear level sets for dimensionality reduction in function approximation

Achievement


One of the major sources of uncertainty in building predictive models is the limited information (i.e., training data) available when the input space is high-dimensional. In this case, an effective way to reduce uncertainty is to reduce the dimension of the input space. The question we try to answer is: "How can we effectively reduce the dimension of a function with independent input variables?" We developed two types of nonlinear level set learning models during the project. In Ref. [1], we developed a RevNet-based level set learning method, whose key novelty is a new loss function for training the RevNets. Although the RevNet-based method performs very well (e.g., in Figure 1), we found that it cannot handle functions with critical points, such as the saddle point shown in Figure 2. To address this issue, we improved our method in Ref. [2] by replacing the RevNet with a pseudo-reversible neural network (PRNN). Since a PRNN is not exactly reversible, we added a new regularization term to the loss to encourage pseudo-reversibility. With the PRNN, our method can learn level sets of functions having critical points, which had not been achieved by state-of-the-art dimension reduction methods.
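To make the approach concrete, below is a minimal PyTorch sketch, not the exact implementation of Ref. [1], of an additive-coupling RevNet together with a simplified form of the level set (orthogonality) loss. The names RevBlock, RevNet, and level_set_loss, as well as the hyperparameters, are illustrative; the sketch assumes the gradient of f is available at the training points, and the actual loss in [1] includes further normalization and regularization terms.

import torch
import torch.nn as nn

class RevBlock(nn.Module):
    # One additive coupling block: exactly invertible by construction.
    def __init__(self, d_half, width=32):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(d_half, width), nn.Tanh(),
                               nn.Linear(width, d_half))
        self.G = nn.Sequential(nn.Linear(d_half, width), nn.Tanh(),
                               nn.Linear(width, d_half))

    def forward(self, x1, x2):
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.G(y1)
        x1 = y1 - self.F(x2)
        return x1, x2

class RevNet(nn.Module):
    # Stack of coupling blocks; z = forward(x) and x = inverse(z) exactly.
    def __init__(self, dim, n_blocks=2):
        super().__init__()
        self.blocks = nn.ModuleList(RevBlock(dim // 2) for _ in range(n_blocks))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        for b in self.blocks:
            x1, x2 = b(x1, x2)
        return torch.cat([x1, x2], dim=1)

    def inverse(self, z):
        z1, z2 = z.chunk(2, dim=1)
        for b in reversed(self.blocks):
            z1, z2 = b.inverse(z1, z2)
        return torch.cat([z1, z2], dim=1)

def level_set_loss(net, x, grad_f, n_active=1):
    # For each inactive coordinate z_i, the tangent direction of the inverse
    # map, J_h(z) e_i, should be orthogonal to grad f(x), so that moving
    # along z_i stays (approximately) on a level set of f.
    z = net(x)
    loss = x.new_zeros(())
    for i in range(n_active, z.shape[1]):
        e = torch.zeros_like(z)
        e[:, i] = 1.0
        # Column i of the Jacobian of the inverse map, via a JVP.
        _, col = torch.autograd.functional.jvp(net.inverse, z, v=e,
                                               create_graph=True)
        loss = loss + ((col * grad_f).sum(dim=1) ** 2).mean()
    return loss

Training then amounts to minimizing this loss over the RevNet parameters with a standard optimizer; the first n_active coordinates of z serve as the reduced inputs for the downstream regression of f.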

Figure 1: Comparison of neural network approximation of $f = x_1^2 + \cdots + x_{20}^2$ in $[0,1]^{20}$ with and without our method. (Left column): our method completely avoids overfitting, with a relative error of 1.6%; (Middle column): direct neural network regression suffers severe overfitting, with a 35.1% relative error; (Right column): direct neural network regression with more hidden neurons overfits even more severely, with a 51.3% relative error.

Figure 2: Level set learning and function approximation results produced by our method for $f = x_1^2 - x_2^2$ in $[-1,1]^2$, where the critical point is at $[0, 0]$. While the RevNet model in [1] cannot learn the level sets of this function using a reversible transform, the PRNN model successfully learns them, as shown in the quiver plot (left): the gradient field (blue arrows) is perpendicular to the red vector field learned by the PRNN model.
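For contrast, here is a minimal sketch of the pseudo-reversibility idea from Ref. [2], with illustrative names: the forward transform g and the approximate inverse h are independent networks rather than a single structurally reversible map, and reversibility is only encouraged through a reconstruction penalty added to the loss.

import torch
import torch.nn as nn

def mlp(d_in, d_out, width=64):
    return nn.Sequential(nn.Linear(d_in, width), nn.Tanh(),
                         nn.Linear(width, d_out))

g = mlp(2, 2)  # forward transform z = g(x)
h = mlp(2, 2)  # approximate inverse x ~ h(z); not tied to g structurally

def pseudo_reversibility_penalty(x):
    # Since h is not the exact inverse of g, this reconstruction term is
    # what pushes the pair toward (pseudo-)reversibility during training.
    return ((x - h(g(x))) ** 2).sum(dim=1).mean()

In Ref. [2], a weighted version of this penalty is combined with the level set learning loss; because exact invertibility is never enforced, the transform remains trainable for functions such as $f = x_1^2 - x_2^2$, whose level sets cannot be captured by an exactly reversible map.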

Publications

[1] G. Zhang, J. Zhang and J. Hinkle, Learning nonlinear level sets for dimensionality reduction in function approximation, Advances in Neural Information Processing Systems (NeurIPS), 32, pp. 13199-13208, 2019.

[2] Y. Teng, Z. Wang, L. Ju, A. Gruber and G. Zhang, Level set learning with pseudo-reversible neural networks for nonlinear dimension reduction in function approximation, SIAM Journal on Scientific Computing, 45(3), pp. A1148-A1171, 2023.