# PLELog

This is the basic implementation of our submission to ICSE 2021: *Semi-supervised Log-based Anomaly Detection via Probabilistic Label Estimation*.

## Description

PLELog is a novel approach for log-based anomaly detection via probabilistic label estimation. It is designed to detect anomalies in unlabeled logs effectively while avoiding the manual labeling effort needed to generate training data. We embed the semantic information of log events into fixed-length vectors and apply HDBSCAN to cluster log sequences automatically. We then propose a probabilistic label estimation approach to reduce the noise introduced by mislabeling, and feed the "labeled" instances into an attention-based GRU network for training. We conducted an empirical study to evaluate the effectiveness of PLELog on two open-source log datasets (i.e., HDFS and BGL), and the results demonstrate its effectiveness. In particular, PLELog has been applied to two real-world systems from a university and a large corporation, further demonstrating its practicability.
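For intuition, the sketch below is a minimal illustration of this pipeline, not the code in this repository: the function name, the `known_normal_idx` argument, and the exact way cluster membership probabilities are turned into soft labels are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not the repository's actual implementation.
# It assumes each log sequence has already been embedded as a fixed-length
# semantic vector, and shows how dimensionality reduction (FastICA, cf. the
# --reduce_dim option), HDBSCAN clustering, and cluster membership
# probabilities could be combined into probabilistic ("soft") labels for a
# downstream classifier. The labeling heuristic here is an assumption.
import numpy as np
import hdbscan
from sklearn.decomposition import FastICA

def estimate_soft_labels(sequence_vectors, known_normal_idx,
                         reduce_dim=50, min_cluster_size=100, min_samples=100):
    """Cluster log-sequence vectors and derive soft anomaly labels."""
    reduced = FastICA(n_components=reduce_dim).fit_transform(sequence_vectors)
    clusterer = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size,
                                min_samples=min_samples)
    cluster_ids = clusterer.fit_predict(reduced)

    # Clusters that contain known-normal sequences are treated as normal.
    normal_clusters = set(cluster_ids[known_normal_idx]) - {-1}

    soft_labels = np.ones(len(sequence_vectors))      # 1.0 = anomalous
    for i, (cid, prob) in enumerate(zip(cluster_ids, clusterer.probabilities_)):
        if cid in normal_clusters:
            soft_labels[i] = 1.0 - prob               # confident members ~ normal
        # noise points (cid == -1) and unseen clusters stay close to anomalous
    return soft_labels
```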
## Project Structure
```
├─approaches   # HDBSCAN & RNN approaches, including training, validating, and testing.
├─config
├─data         # Code for data processing.
├─utils
├─dataset
│ ├─BGL        # Sample data for BGL (quick start).
├─model        # RNN models.
├─module       # Anomaly detection modules, including classifier, Attention, etc.
├─outmodel     # Parameters of trained models; the detailed save path is set in the config files.
├─logs
├─output_res   # Output results of the attention-based GRU classification model.
├─pipeline.py  # Main entrance code.
└─test.py      # Quick start for PLELog.
```
## Datasets
We used two open-source log datasets, HDFS and BGL. In the future, we plan to test PLELog on more log data.
## Reproducibility

### Environment

Note: We attach great importance to the reproducibility of PLELog. To run and reproduce our results, please try to install the suggested versions of the key packages.

Key packages: the main required Python packages are PyTorch, overrides, hdbscan, and scikit-learn. Anaconda is recommended to manage these packages and their versions. If hdbscan or overrides is not available through Anaconda, install it with pip (e.g., `pip install hdbscan overrides`).
### Preparation
You need to follow these steps to run PLELog completely.

Step 1: To run PLELog on different log data, create a directory under the `dataset` folder with a unique and memorable name (e.g., HDFS and BGL). PLELog will look for the related files and create logs and results according to this name.

Step 2: Move the target log file (plain text, one log message per line) into the folder from Step 1.

Step 3: Run `utils/Drain.py` (make sure it has proper parameters) to parse the logs and extract log templates. You can find details about the Drain parser from IBM.

Step 4: Download the word vector file `nlp-word.vec` and put it under the `dataset` folder.
Note: Since logs can be very different, this repository only provides the processing approach for HDFS and BGL w.r.t. our experimental setting. If you want to apply PLELog to new log data, please refer to the `prepare_data` method in `pipeline.py` to add new pre-processing methods (see the sketch below).
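For reference, the following is a minimal, hypothetical sketch of what a new pre-processing method could look like. The function name `prepare_mylog`, the file name, and the column names are illustrative assumptions and not part of this repository; the actual `prepare_data` method in `pipeline.py` may use different names and return types.

```python
# Hypothetical sketch of a pre-processing method for a new dataset.
# The general idea: group the log lines parsed by Drain into per-session
# sequences of event (template) IDs, which PLELog then embeds and clusters.
import csv
import os

def prepare_mylog(dataset_dir, parsed_log="mylog.log_structured.csv"):
    """Group parsed log lines into event-ID sequences keyed by a session id."""
    sequences = {}
    with open(os.path.join(dataset_dir, parsed_log), newline="") as f:
        for row in csv.DictReader(f):
            # Assumption: the structured CSV produced by the parser has a
            # session/block identifier column and an EventId column.
            session = row.get("SessionId", "global")
            sequences.setdefault(session, []).append(row["EventId"])
    return list(sequences.values())
```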
## Anomaly Detection
Complete: You can run PLELog from the ground up by running `pipeline.py` after the preparation. The results are written to the `logs` folder, in files named after the detailed settings, and the classification results are saved in the `output_res` folder for further analysis.

Quick Start: Since HDBSCAN may need hours to finish, we provide a trained model (on the BGL dataset) and a test input as a quick start for PLELog; just run `test.py` under the correct environment. Logs will be written to `log/test.log`, and you can find the results at the end of the file.

Feel free to play with PLELog through the command-line parameters below. (Results from different settings are stored separately, don't worry! :P)
```
usage: pipeline.py [-h] [--config_file CONFIG_FILE] [--gpu GPU] [--hdbscan_option HDBSCAN_OPTION]
                   [--dataset DATASET] [--train_ratio TRAIN_RATIO] [--dev_ratio DEV_RATIO]
                   [--test_ratio TEST_RATIO] [--min_cluster_size MIN_CLUSTER_SIZE]
                   [--min_samples MIN_SAMPLES] [--reduce_dim REDUCE_DIM]

optional arguments:
  -h, --help            show this help message and exit
  --config_file CONFIG_FILE
                        Configuration file for the attention-based GRU network.
  --gpu GPU             GPU ID if using CUDA, -1 for CPU.
  --hdbscan_option HDBSCAN_OPTION
                        Strategy for HDBSCAN clustering: 0 for PLELog_noP, 1 for PLELog, -1 for the upper bound.
  --dataset DATASET     Choose the dataset, HDFS or BGL.
  --train_ratio TRAIN_RATIO
                        Ratio of training data. Default 6.
  --dev_ratio DEV_RATIO
                        Ratio of dev data. Default 1.
  --test_ratio TEST_RATIO
                        Ratio of test data. Default 3.
  --min_cluster_size MIN_CLUSTER_SIZE
                        Minimum cluster size, a parameter of HDBSCAN.
  --min_samples MIN_SAMPLES
                        Minimum samples, a parameter of HDBSCAN.
  --reduce_dim REDUCE_DIM
                        Target dimension of FastICA.
  --thredshold THRESHOLD
                        Threshold for final classification; any instance with an "anomalous score" higher than this threshold is regarded as an anomaly.
```
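To make the `--config_file` and threshold options concrete, here is a minimal, illustrative PyTorch sketch of an attention-based GRU classifier that turns a log sequence into an "anomalous score" compared against a threshold. The layer sizes and the attention form are assumptions; the actual network in this repository is defined by its config files and the code under `module`.

```python
# Illustrative sketch only -- not the network shipped in this repository.
import torch
import torch.nn as nn

class AttentionGRU(nn.Module):
    def __init__(self, input_dim=300, hidden_dim=128):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)      # scores each time step
        self.out = nn.Linear(2 * hidden_dim, 1)       # anomalous score (logit)

    def forward(self, x):                             # x: (batch, seq_len, input_dim)
        h, _ = self.gru(x)                            # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        context = (weights * h).sum(dim=1)            # weighted sequence summary
        return torch.sigmoid(self.out(context)).squeeze(-1)

# Usage: sequences scoring above the threshold are reported as anomalies.
model = AttentionGRU()
scores = model(torch.randn(4, 50, 300))               # 4 sequences of 50 events
predictions = scores > 0.5                            # cf. the --thredshold option
```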
## Contact

We are happy to see PLELog being applied in the real world and are willing to contribute to the community. Feel free to contact us if you have any questions!

Authors information: