Recently, the so-called "dead drop" method, in which a dropper leaves drugs at a predetermined location and the buyer retrieves them later, has become increasingly common. As illegal drug trades such as dead drops increase, the domestic drug problem is becoming more severe. To address this issue, we use atomic action recognition to detect suspicious dead-drop-related behaviors and record them together with the suspect's ID, and feed this information into a sequential-model-based composite action recognition model. As the atomic actions of a specific suspect accumulate, the current composite action state is estimated from them to recognize dead drops.
Atomic action recognition is a subfield of action recognition that focuses on identifying and classifying the smallest meaningful units of action or movement within a sequence. These atomic actions are the fundamental building blocks of more complex activities; in the context of human activities, they might include movements such as raising a hand, taking a step, or turning the head. In this project, we aim to detect dead drops by first identifying atomic actions such as digging, looking around, and checking a phone, and then combining them through composite action recognition.
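As a concrete illustration, the sketch below shows one possible way to represent a detected atomic action as a structured event that can later be fed into composite action recognition. The class and field names (AtomicActionEvent, person_id, action_label, start_time, end_time) are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass

# Minimal sketch of one atomic action detection as a structured event.
# Field names are illustrative assumptions, not the project's actual schema.
@dataclass
class AtomicActionEvent:
    person_id: int       # tracked identity of the person performing the action
    action_label: str    # e.g., "digging", "looking_around", "checking_phone"
    start_time: float    # start of the action, in seconds from the video start
    end_time: float      # end of the action, in seconds from the video start

# Example: three atomic actions observed for the same person.
events = [
    AtomicActionEvent(person_id=7, action_label="looking_around", start_time=12.4, end_time=15.0),
    AtomicActionEvent(person_id=7, action_label="digging",        start_time=15.2, end_time=21.8),
    AtomicActionEvent(person_id=7, action_label="checking_phone", start_time=22.0, end_time=24.5),
]
```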
In this research, composite action recognition is employed to detect drug dead drops. First, information such as action labels, person IDs, and action durations (start and end times) is extracted through the previously performed atomic action recognition. This extracted information is then analyzed with sequential models such as LSTMs, GNNs, and Transformers to identify patterns of actions that occur in sequence, such as loitering, taking photos, digging, burying, and placing objects in secluded areas. This approach detects drug dead drops by recognizing complex action sequences that are not easily identified through atomic action recognition alone.
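The sketch below illustrates how a per-person sequence of such atomic action events could be passed to a sequential model. It uses a small LSTM classifier in PyTorch; the class name, feature encoding, and hyperparameters are assumptions for illustration rather than the exact model used in this work.

```python
import torch
import torch.nn as nn

class CompositeActionLSTM(nn.Module):
    """Minimal sketch: classify a per-person sequence of atomic action events
    into a composite action (e.g., normal activity vs. dead drop)."""

    def __init__(self, num_atomic_actions: int, num_composite_actions: int,
                 embed_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.action_embed = nn.Embedding(num_atomic_actions, embed_dim)
        # Each step concatenates the action-label embedding with a scalar duration feature.
        self.lstm = nn.LSTM(embed_dim + 1, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_composite_actions)

    def forward(self, action_ids: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
        # action_ids: (batch, seq_len) integer atomic action labels
        # durations:  (batch, seq_len) action durations in seconds (end_time - start_time)
        x = torch.cat([self.action_embed(action_ids), durations.unsqueeze(-1)], dim=-1)
        _, (h_n, _) = self.lstm(x)       # h_n: (num_layers, batch, hidden_dim)
        return self.classifier(h_n[-1])  # logits over composite action classes

# Example: one person's sequence of three atomic actions
# (e.g., looking around -> digging -> checking a phone), classified as a whole.
model = CompositeActionLSTM(num_atomic_actions=10, num_composite_actions=2)
action_ids = torch.tensor([[1, 4, 5]])        # indices of the atomic action labels
durations = torch.tensor([[3.2, 6.6, 2.5]])   # how long each atomic action lasted, in seconds
logits = model(action_ids, durations)         # shape (1, 2), e.g., [normal, dead_drop]
```

In practice, the same interface could also accommodate a Transformer or graph-based encoder in place of the LSTM, since the input is simply an ordered sequence of labeled, timestamped atomic actions per tracked person.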
YeEun Joo, Junbeom Moon, Seangmin Lee, Hyunji Lee and Soon Ki Jung, Context-Aware Global–Local Fusion for Weakly Supervised Video Anomaly Detection, 2025 8th Artificial Intelligence and Cloud Computing Conference (AICCC 2025), (2025.12.20 ~ 2025.12.22)
Junbeom Moon, Jiye Won, YeEun Joo, Sehwan Heo and Soon Ki Jung, Hierarchical Action Understanding: Fine-to-Coarse Reasoning Framework for Video Interpretation, IEEE International Conference on Advanced Visual and Signal-Based Systems (AVSS 2025), (2025.08.11 ~ 2025.08.13)
Junbeom Moon, Sehwan Heo, Jiye Won, Jaeseok Jang and Soon Ki Jung, State Space Model Based VideoMAE Enhancement for Efficient Video Action Classification, The International Conference on Artificial Intelligence in Information and Communication 2025 (ICAIIC 2025), (2025.02.18 ~ 2025.02.21)
Sehwan Heo, Junbeom Moon, Jiye Won and Soon Ki Jung, Improving Computational Efficiency in Video Analysis with Mamba-Based Architectures, The 13th International Conference on Smart Media and Applications (SMA 2024), (2024.12.18 ~ 2024.12.22)
Sehwan Heo, Junbeom Moon, YeEun Joo and Soon Ki Jung, Enhancing Memory Efficiency by Redesigning Model Architecture based on Cross Attention Mechanism, The 7th International Conference on Culture Technology and Applications (ICCT 2024), (2024.10.23 ~ 2024.10.26)
Junbeom Moon and Soon Ki Jung, Composite Action Recognition Using Hierarchical Classification and Atomic Action Sequences, The 20th International Conference on Multimedia Information Technology and Applications (MITA 2024), (2024.07.23 ~ 2024.07.26)
Sehwan Heo and Soon Ki Jung, A Survey on Vision Transformer-based Action Recognition Models, The 20th International Conference on Multimedia Information Technology and Applications (MITA 2024), (2024.07.23 ~ 2024.07.26)