Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.
INTRODUCTION: Mining activity has always been connected with the occurrence of dangers commonly called mining hazards. A special case of such a threat is a seismic hazard, which frequently occurs in many underground mines. Seismic hazard is among the hardest natural hazards to detect and predict, and in this respect it is comparable to an earthquake. The complexity of seismic processes and the large disproportion between the number of low-energy seismic events and the number of high-energy phenomena cause statistical techniques to be insufficient for predicting seismic hazard. Therefore, it is essential to search for new opportunities for better hazard prediction, including machine learning methods.
In iterations Take1 and Take2, we had three algorithms with high accuracy and ROC results but with strong biases due to the imbalance of our dataset. For this iteration, we will examine the feasibility of using SMOTE (Synthetic Minority Over-sampling Technique) to balance the dataset.
CONCLUSION: From the previous Take1 iteration, the baseline performance of the eight algorithms achieved an average accuracy of 91.94%. Three algorithms (Logistic Regression, Support Vector Machine, and Stochastic Gradient Boosting) achieved the top three accuracy scores after the first round of modeling. After a series of tuning trials, the Support Vector Machine algorithm turned in the best accuracy result of 93.36%, but with very low precision and recall scores for the positive cases when processing the validation dataset. With the imbalanced dataset we have on hand, we needed to look for another metric or another approach to evaluate the models.
From the previous Take2 iteration, the baseline performance of the eight algorithms achieved an average ROC score of 66.16%. Three algorithms (Logistic Regression, AdaBoost, and Stochastic Gradient Boosting) achieved the top three ROC scores after the first round of modeling. After a series of tuning trials, Stochastic Gradient Boosting turned in the best ROC result of 77.98%, but with dismal precision and recall scores.
From the current iteration, the baseline performance of the ten algorithms achieved an average accuracy score of 75.83%. Three algorithms (Random Forest, Extra Trees, and Stochastic Gradient Boosting) achieved the top three accuracy scores after the first round of modeling. After a series of tuning trials, Extra Trees turned in the best accuracy result of 96.38%, but still with very low precision and recall scores.
The ROC metric has given us a more viable way to evaluate the models than accuracy alone. Also, the SMOTE technique helped make the model evaluation more realistic given the imbalanced dataset we have. For this project, however, the modeling choice remained inconclusive because the data imbalance issue still requires a resolution.
Dataset Used: Seismic Bumps Data Set
Dataset ML Model: Binary classification with numerical and categorical attributes
Dataset Reference: https://archive.ics.uci.edu/ml/datasets/seismic-bumps
The HTML-formatted report can be found here on GitHub.