The TRIGA (Training, Research, Isotopes, General Atomics) reactor is a type of research reactor designed for training, medical isotope production, and various nuclear experiments. Known for its inherent safety features, the TRIGA reactor uses a uranium zirconium hydride fuel whose large prompt negative temperature coefficient automatically limits power excursions, making it ideal for educational and experimental purposes.
What makes TRIGA particularly unique is its operational accessibility and inherent safety. Unlike power reactors that typically operate continuously for long periods, TRIGA reactors are started up and shut down almost daily. This operational flexibility makes them ideal platforms for studying startup behavior, shutdown procedures, and transient conditions under well-controlled experimental setups.
Moreover, the TRIGA reactor was specifically designed with strong safety features, enabling not only routine steady-state operation but also a wide range of dynamic experiments such as pulse mode and square wave reactivity insertions. Although TRIGA reactors are considered well understood, there remain many nuanced aspects of their behavior that continue to challenge researchers and offer opportunities for deeper insight.
My project focuses on the implementation of Digital Twin Technology on the TRIGA reactor to enhance operational efficiency and safety. A Digital Twin is a virtual replica that continuously updates and interacts with real-world data, allowing us to simulate and predict the dynamic behavior of the reactor. This is crucial for:
Real-time monitoring of reactor performance
Accurate prediction of dynamic responses under different operating conditions
Providing control guidance to operators for optimized decision-making
Development of a website that serves as the digital twin dashboard
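The dynamic behavior a digital twin must reproduce can be sketched with a one-group point-kinetics model. The sketch below is purely illustrative: the parameter values (delayed-neutron fraction, generation time, precursor decay constant) are generic assumptions, not data from any specific TRIGA core, and a real twin would assimilate live instrument readings rather than integrate in isolation.

```python
import numpy as np

# Illustrative one-group point-kinetics parameters (assumed, not plant data).
BETA = 0.007      # effective delayed-neutron fraction
LAMBDA = 4.0e-5   # prompt neutron generation time (s)
DECAY = 0.08      # one-group precursor decay constant (1/s)

def step(n, c, rho, dt):
    """Advance neutron density n and precursor density c by dt (explicit Euler)."""
    dn = ((rho - BETA) / LAMBDA) * n + DECAY * c
    dc = (BETA / LAMBDA) * n - DECAY * c
    return n + dt * dn, c + dt * dc

def simulate(rho, t_end=1.0, dt=1.0e-5):
    """Relative power after a constant reactivity insertion rho, from equilibrium."""
    n = 1.0
    c = BETA / (DECAY * LAMBDA) * n   # equilibrium precursor density
    for _ in range(int(t_end / dt)):
        n, c = step(n, c, rho, dt)
    return n

# A small positive insertion (rho < beta) raises power above the initial level.
print(simulate(rho=0.001))
```

In a twin, each integration step would be corrected against streaming sensor data, and the model state would feed the dashboard's real-time displays.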
Machine learning and artificial intelligence are powerful tools for explaining the behavior of complex reactors. Without an understanding of the physical laws governing that behavior, however, these tools lose much of their effectiveness. This project aims to leverage machine learning and AI techniques to predict key indicators that describe reactor behavior.
Develop and test a predictive model for keff using artificial intelligence.
Establish upper limits for key safety indicators used in BWRs.
Train and develop an artificial neural network model capable of handling data with flexible residual distributions.
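A minimal sketch of the kind of neural-network keff predictor described above, written in plain NumPy. Everything here is assumed for illustration: the three input "core parameters," the synthetic keff relationship, and the network size have no connection to real reactor data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3 hypothetical core parameters -> keff.
# The "true" relationship below is invented purely for illustration.
X = rng.uniform(-1.0, 1.0, size=(500, 3))
y = 1.0 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + 0.02 * X[:, 0] * X[:, 2]

# One-hidden-layer network trained with full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(4000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = (h @ W2 + b2).ravel()      # predicted keff
    err = pred - y
    # Backpropagation of the mean-squared-error loss
    g_pred = (2.0 / len(y)) * err[:, None]
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h**2)
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

rmse = float(np.sqrt(np.mean(err**2)))
print(f"training RMSE: {rmse:.5f}")
```

A production model would add a held-out validation set and uncertainty estimates on the predictions; this sketch only shows the basic regression machinery.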
The Generalized Linear Least-Squares Method (GLLSM) is one of the representative data assimilation techniques in nuclear criticality safety. It improves predictive capability for new reactor models by analyzing how keff depends on high-dimensional nuclear data across different reactor models and adjusting that data against experimental benchmark results.
Compare various data assimilation methodologies and conduct a thorough analysis of their mathematical rationale.
Propose a new metric for evaluating the results of data assimilation.
Identify the strengths and limitations of the current GLLSM methodology and suggest possible alternatives.
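The core GLLS update can be sketched in a few lines of linear algebra. The dimensions, sensitivity values, and covariances below are invented for illustration; a real application would use computed sensitivity profiles and evaluated nuclear-data covariance libraries.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical problem: 5 nuclear-data parameters, 3 benchmark keff values.
S = rng.normal(0, 0.3, (3, 5))                   # sensitivity matrix dC/dalpha
M = np.diag(rng.uniform(0.01, 0.05, 5) ** 2)     # prior parameter covariance
V = np.diag([0.002 ** 2] * 3)                    # experimental covariance
d = rng.normal(0, 0.003, 3)                      # discrepancies E - C (illustrative)

# GLLS adjustment: gain matrix, parameter update, posterior covariance.
K = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)
dalpha = K @ d                                    # data adjustments
M_post = M - K @ S @ M                            # reduced posterior covariance

# The adjustment shrinks both the C/E discrepancies and the response variance.
prior_var = np.diag(S @ M @ S.T)
post_var = np.diag(S @ M_post @ S.T)
print(prior_var, post_var)
```

The posterior response variance can never exceed the prior, which is exactly the kind of property a new evaluation metric for assimilation results would need to account for.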
Sensitivity analysis and uncertainty quantification are fundamental aspects of data science. In nearly all computational processes, there exists a sensitivity relationship between input and output variables or among intermediate parameters. Additionally, most input parameters carry inherent uncertainties, and in some cases, only rough estimations are available. These uncertainties propagate through the system, ultimately affecting the uncertainty of the final results.
Conduct research on constraint sensitivity analysis applicable to neutronics.
Develop a new metric to enhance the traditionally used c_k similarity coefficient.
Formulate a nonlinear relevance index based on information theory.
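The two traditional tools this line of work builds on, the sandwich rule for uncertainty propagation and the c_k similarity coefficient, can be sketched as follows. The sensitivity vectors and covariance matrix are invented for illustration.

```python
import numpy as np

def sandwich_variance(S, M):
    """Sandwich rule: response variance from sensitivity vector S and
    parameter covariance matrix M."""
    return float(S @ M @ S)

def ck(Sa, Sb, M):
    """c_k similarity coefficient: correlation between two systems'
    response uncertainties induced by the shared data covariance M."""
    cov = float(Sa @ M @ Sb)
    return cov / np.sqrt(sandwich_variance(Sa, M) * sandwich_variance(Sb, M))

# Illustrative 4-parameter example (all values are assumptions).
M = np.diag([0.02, 0.01, 0.03, 0.015]) ** 2
Sa = np.array([0.5, -0.2, 0.1, 0.0])
Sb = np.array([0.4, -0.1, 0.2, 0.05])

print(ck(Sa, Sa, M))  # identical systems are perfectly correlated
print(ck(Sa, Sb, M))
```

Because c_k is a linear correlation through a shared covariance, it can miss nonlinear dependence between systems, which is the motivation for an information-theoretic relevance index.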