1. Contribution and Significance
This paper presents a systematic comparative study of eight deep learning architectures for seismic signal denoising, addressing a critical challenge in building reliable earthquake prediction systems.
Unlike prior works that focus on improving single-model performance, this research emphasizes cross-architecture evaluation under controlled experimental settings. The comprehensive design and standardized comparison metrics make the findings highly reproducible and practically valuable for future AI-based seismology research.
The main contribution lies in identifying PatchTST, a Transformer-based model, as the most effective architecture for denoising seismic waveforms. The paper demonstrates that PatchTST achieves 0.0004 MSE, 99.98% accuracy, and a 0% false-negative rate, outperforming CNN-, RNN-, and Autoencoder-based models. This result validates the potential of patch-based temporal representation learning for complex time series such as seismic signals.
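To make the patch-based representation concrete, the sketch below splits a 1-D waveform into overlapping patches the way PatchTST tokenizes its input before attention is applied over patches rather than raw samples. The `patch_len` and `stride` values here are illustrative defaults, not the paper's settings.

```python
import numpy as np

def patchify(x, patch_len=16, stride=8):
    """Split a 1-D signal into overlapping patches (PatchTST-style).

    Each patch becomes one 'token' for the Transformer encoder,
    so attention models dependencies between patches, capturing
    both local waveform shape and longer-range structure.
    """
    n_patches = (len(x) - patch_len) // stride + 1
    return np.stack([x[i * stride : i * stride + patch_len]
                     for i in range(n_patches)])

signal = np.sin(np.linspace(0, 8 * np.pi, 128))  # toy waveform
patches = patchify(signal)                       # shape (15, 16)
```

With 128 samples, a patch length of 16, and a stride of 8, the encoder sees 15 tokens instead of 128 individual samples, which is what makes attention over long seismic traces tractable.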
2. Methodology Evaluation
The methodology is rigorous and transparent.
Dataset: The use of 40,000 MSEED-format samples with real earthquake and noise data ensures realism and robustness.
Preprocessing: The authors carefully apply DC removal, cosine tapering, bandpass filtering (0.1–10 Hz), and normalization. This process accurately reflects the physical characteristics of Korean seismic data.
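A minimal sketch of such a pipeline using SciPy; the 100 Hz sampling rate, filter order, and taper fraction are assumptions for illustration, since only the band edges (0.1–10 Hz) are stated above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.signal.windows import tukey

def preprocess(trace, fs=100.0, fmin=0.1, fmax=10.0, taper_frac=0.05):
    """DC removal, cosine taper, 0.1-10 Hz bandpass, peak normalization."""
    x = trace - trace.mean()                  # DC (mean) removal
    x = x * tukey(len(x), alpha=taper_frac)   # cosine taper at both ends
    sos = butter(4, [fmin, fmax], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, x)                   # zero-phase bandpass filter
    return x / (np.abs(x).max() + 1e-12)      # normalize peak amplitude to 1

# Example: condition a 15 s trace sampled at 100 Hz
trace = np.random.default_rng(0).normal(size=1500)
ready = preprocess(trace)
```

The zero-phase `sosfiltfilt` call avoids distorting arrival times, which matters when the denoised trace feeds downstream phase picking.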
Experimental Control: All models were trained with identical hyperparameters (Adam optimizer, lr=1e-3, batch=16, 20 epochs) on the same hardware, ensuring fairness in performance comparison.
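The shared training configuration can be sketched in PyTorch as follows. Only the optimizer, learning rate, batch size, and epoch count come from the paper; the two-layer network and random tensors are placeholders standing in for any of the eight architectures and the real waveform pairs.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model: in the study this slot holds each of the
# eight denoising architectures in turn.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))

# Shared settings reported in the paper: Adam, lr=1e-3, batch=16, 20 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

noisy = torch.randn(64, 128)   # placeholder noisy waveforms
clean = torch.randn(64, 128)   # placeholder clean targets
loader = DataLoader(TensorDataset(noisy, clean), batch_size=16, shuffle=True)

for epoch in range(20):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)  # denoising objective: MSE to clean
        loss.backward()
        optimizer.step()
```

Holding this loop fixed across models is what makes the reported MSE differences attributable to architecture rather than tuning.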
The study also introduces a multi-layer evaluation scheme—using both regression metrics (MSE, MAE) and classification metrics (Accuracy, F1, FPR, FNR)—which gives a holistic view of model performance from both statistical and operational perspectives.
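The two metric families can be computed as below; these are the standard textbook definitions, not the paper's evaluation code.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Waveform-level denoising quality."""
    err = y_pred - y_true
    return {"MSE": float(np.mean(err ** 2)),
            "MAE": float(np.mean(np.abs(err)))}

def classification_metrics(labels, preds):
    """Event-level detection quality (1 = earthquake, 0 = noise)."""
    tp = np.sum((labels == 1) & (preds == 1))
    tn = np.sum((labels == 0) & (preds == 0))
    fp = np.sum((labels == 0) & (preds == 1))
    fn = np.sum((labels == 1) & (preds == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "Accuracy": float((tp + tn) / len(labels)),
        "F1": 2 * precision * recall / (precision + recall)
              if precision + recall else 0.0,
        "FPR": fp / (fp + tn) if fp + tn else 0.0,  # false alarms
        "FNR": fn / (fn + tp) if fn + tp else 0.0,  # missed events
    }
```

FNR is the operationally critical number here: a missed event (false negative) is far more costly than a false alarm in an earthquake-warning setting.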
3. Strengths
Reproducibility and Fairness: Every model was tested under strictly equal conditions, removing typical confounders in comparative AI studies.
Empirical Clarity: Quantitative results clearly show the dominance of Transformer-based approaches in handling both local and global dependencies.
Applicability: The focus on false-negative reduction aligns directly with the safety-critical nature of earthquake prediction, enhancing societal relevance.
4. Limitations and Areas for Improvement
While the paper is methodologically sound, several areas could be expanded in future work:
The dataset is limited to a single national seismic source (Korean region). Generalization to global datasets or heterogeneous geological conditions should be tested.
The FFTformer and Denoising Autoencoder results are notably poor, but the analysis of why these models fail could be deepened. A spectral attention visualization or error distribution analysis would strengthen interpretability.
The study could benefit from additional uncertainty quantification and real-time performance evaluation to bridge the gap between laboratory and deployment environments.
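One lightweight route to the uncertainty quantification suggested above is a deep ensemble: run several independently trained denoisers on the same input and treat the per-sample spread of their outputs as uncertainty. A minimal NumPy sketch; the ensemble members here are noisy stand-in functions, not the paper's trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_member(scale):
    """Stand-in for one independently trained denoiser.

    A real member would be a trained network; here each just adds
    small random perturbations so the ensemble disagrees slightly.
    """
    return lambda x: x + rng.normal(0.0, scale, size=x.shape)

members = [make_member(0.05) for _ in range(5)]

noisy = np.sin(np.linspace(0, 2 * np.pi, 200))       # toy input trace
preds = np.stack([m(noisy) for m in members])        # shape (5, 200)

mean_pred = preds.mean(axis=0)    # ensemble denoised estimate
uncertainty = preds.std(axis=0)   # per-sample predictive spread
```

Samples where `uncertainty` is large flag regions where the denoisers disagree, which is exactly where a safety-critical pipeline should defer to a human analyst.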
5. Overall Assessment
This paper represents a well-executed comparative benchmark at the intersection of AI and seismology.
It highlights that recent Transformer models—particularly PatchTST—outperform traditional CNN, RNN, and Autoencoder structures in seismic noise reduction.
The work is both methodologically rigorous and application-oriented, serving as a strong foundation for future research on reliable, real-time earthquake detection systems.
6. Reviewer’s Closing Remarks
This research effectively bridges data-driven AI approaches and geophysical signal processing.
Its disciplined experimentation, transparent reporting, and clear conclusions make it a credible contribution to applied machine learning in the geoscience domain.
Further studies extending to ensemble architectures or uncertainty-aware prediction would likely enhance both scientific and practical impact.