Log-based Anomaly Detection Without Log Parsing
NeuralLog consists of the following components:
Preprocessing: Special characters and numbers are removed from log messages.
Neural Representation: Semantic vectors are extracted from log messages using BERT.
Transformer-based Classification: A transformer-based classification model, consisting of positional encoding and a Transformer encoder, is applied to detect anomalies.
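The preprocessing step above can be illustrated with a minimal sketch. This is an assumption about what "removing special characters and numbers" looks like in practice (the function name `preprocess` and the exact regex are ours, not from the NeuralLog code):

```python
import re

def preprocess(log_message: str) -> str:
    """Strip numbers and special characters from a raw log message,
    keeping only alphabetic tokens (a simplified reading of
    NeuralLog's preprocessing step, not its exact implementation)."""
    # Replace every run of non-letter characters with a space,
    # then collapse repeated whitespace.
    cleaned = re.sub(r"[^A-Za-z ]+", " ", log_message)
    return " ".join(cleaned.split())

print(preprocess("Received block blk_-1608999687919862906 of size 91178 from /10.250.10.6"))
# -> "Received block blk of size from"
```

The cleaned token sequence is what would then be fed to BERT to obtain a semantic vector per log message.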
Log-based Anomaly Detection with Deep Learning: How Far Are We?
Software-intensive systems produce logs for troubleshooting purposes. Recently, many deep learning models have been proposed to automatically detect system anomalies from log data. These models typically claim very high detection accuracy; for example, most report an F-measure greater than 0.9 on the commonly used HDFS dataset. To gain a deeper understanding of how far we are from solving the problem of log-based anomaly detection, in this paper we conduct an in-depth analysis of five state-of-the-art deep learning-based models for detecting system anomalies on four public log datasets. Our experiments focus on several aspects of model evaluation, including training data selection, data grouping, class distribution, data noise, and early detection ability. Our results show that all of these aspects have a significant impact on the evaluation, and that none of the studied models performs consistently well. The problem of log-based anomaly detection has not been solved yet. Based on our findings, we also suggest possible future work.

This repository provides implementations of recent log-based anomaly detection methods.
Log Parsing with Prompt-based Few-shot Learning
Idea – Log parsing as a label token prediction task
Predict the label token PARAM at the positions of parameters
Predict the original tokens at the positions of keywords
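The labelling scheme above can be sketched as follows. Given a log message and its template (with `<*>` marking parameter slots), each token's target label is either PARAM or the token itself; the helper name `token_labels` and the alignment-by-position assumption are ours, for illustration only:

```python
def token_labels(log_tokens, template_tokens):
    """Assign a target label to each log token: positions matching the
    template keep the original token as the label, wildcard positions
    ('<*>') are labelled PARAM. A toy, position-aligned illustration of
    LogPPT's label-token prediction objective, not its implementation."""
    labels = []
    for tok, tpl in zip(log_tokens, template_tokens):
        labels.append("PARAM" if tpl == "<*>" else tok)
    return labels

log = "Connection from 10.0.0.1 closed".split()
template = "Connection from <*> closed".split()
print(token_labels(log, template))
# -> ['Connection', 'from', 'PARAM', 'closed']
```

At inference time the language model predicts these labels token by token; predicting PARAM at a position marks that token as a parameter, recovering the template.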
LogPPT consists of the following components:
Few-shot Data Sampling: An adaptive random sampling algorithm used to select K labelled logs for training (K is small).
Prompt-based Parsing: A module that tunes a pre-trained language model for log parsing using prompt tuning.
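The intuition behind the sampling component can be sketched as a greedy, diversity-seeking selection: at each step, draw a few random candidate logs and keep the one farthest (here by token-set Jaccard distance) from the logs already selected. This is a hedged sketch of the general idea, with a hypothetical `adaptive_random_sample` helper, and is not the paper's exact algorithm:

```python
import random

def adaptive_random_sample(logs, k, candidates=5, seed=0):
    """Select k diverse logs for labelling: repeatedly draw `candidates`
    random logs and keep the one whose minimum Jaccard distance to the
    already-selected set is largest. Illustrative only."""
    rng = random.Random(seed)  # fixed seed for reproducibility

    def dist(a, b):
        sa, sb = set(a.split()), set(b.split())
        return 1 - len(sa & sb) / len(sa | sb)

    selected = [rng.choice(logs)]
    while len(selected) < k:
        pool = [rng.choice(logs) for _ in range(candidates)]
        # Prefer the candidate farthest from everything chosen so far.
        best = max(pool, key=lambda c: min(dist(c, s) for s in selected))
        selected.append(best)
    return selected
```

The selected K logs would then be manually labelled with templates and used to prompt-tune the language model.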