Regression Model for Online News Popularity Using Python Take 2

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The purpose of this project is to construct a prediction model using various machine learning algorithms and to document the end-to-end steps using a template. The Online News Popularity dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: This dataset summarizes a heterogeneous set of features about articles published by Mashable in a period of two years. The goal is to predict the article’s popularity level in social networks. The dataset does not contain the original content, but some statistics associated with it. The original content can be publicly accessed and retrieved using the provided URLs.

Many thanks to K. Fernandes, P. Vinagre, and P. Cortez ("A Proactive Intelligent Decision Support System for Predicting the Popularity of Online News," Proceedings of the 17th EPIA 2015, Portuguese Conference on Artificial Intelligence, September 2015, Coimbra, Portugal) for making the dataset and benchmarking information available.

In iteration Take1, the script focused on evaluating various machine learning algorithms and identifying the one that produced the best error (RMSE) result. Iteration Take1 established a baseline for both accuracy and processing time.

For this iteration, we will examine the feasibility of a dimensionality reduction technique: ranking attribute importance with the Lasso algorithm and then eliminating the features that fall outside a cumulative importance of 0.99 (99%).
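The selection step described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual script: the toy matrix from `make_regression` stands in for the real 58-attribute Online News Popularity features, and the `alpha` value is an arbitrary placeholder that the real run would tune.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in data; the real script would load the
# Online News Popularity feature matrix here instead.
X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=7)
X = StandardScaler().fit_transform(X)  # scale so coefficients are comparable

# Fit Lasso and treat absolute coefficient size as feature importance.
lasso = Lasso(alpha=0.1, random_state=7).fit(X, y)
importance = np.abs(lasso.coef_)
order = np.argsort(importance)[::-1]  # most important feature first

# Keep the smallest set of features whose cumulative importance
# reaches the 0.99 (99%) threshold; drop the rest.
cumulative = np.cumsum(importance[order]) / importance.sum()
n_keep = int(np.searchsorted(cumulative, 0.99) + 1)
selected = np.sort(order[:n_keep])
X_reduced = X[:, selected]
```

With real data, `X_reduced` would then feed the same modeling pipeline as before, just with the low-importance columns removed.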

ANALYSIS: From the previous iteration Take1, the baseline performance of the machine learning algorithms achieved an average RMSE of 13020. Two algorithms (Linear Regression and ElasticNet) achieved the top RMSE scores after the first round of modeling. After a series of tuning trials, ElasticNet turned in the top result using the training data, achieving a best RMSE of 11273. Using the optimized tuning parameters, the ElasticNet algorithm processed the validation dataset with an RMSE of 12089, which was slightly worse than the RMSE from the training data.

In the current iteration, the baseline performance of the machine learning algorithms achieved an average RMSE of 13128. Two algorithms (Linear Regression and ElasticNet) achieved the top RMSE scores after the first round of modeling. After a series of tuning trials, ElasticNet turned in the top result using the training data, achieving a best RMSE of 11358. Using the optimized tuning parameters, the ElasticNet algorithm processed the validation dataset with an RMSE of 12146, which was slightly worse than the RMSE from the training data.
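The tune-then-validate pattern described above can be sketched like this. Again a hedged illustration: the toy data and the parameter grid (`alpha`, `l1_ratio` values) are placeholders, not the project's actual settings.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split

# Hypothetical stand-in for the reduced feature matrix.
X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=7)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=7
)

# Grid-search ElasticNet over its two main knobs, scoring by RMSE
# (negated, because scikit-learn maximizes scores).
grid = GridSearchCV(
    ElasticNet(max_iter=10000, random_state=7),
    param_grid={"alpha": [0.01, 0.1, 1.0], "l1_ratio": [0.1, 0.5, 0.9]},
    scoring="neg_root_mean_squared_error",
    cv=5,
)
grid.fit(X_train, y_train)
train_rmse = -grid.best_score_  # best cross-validated RMSE on training data

# Re-score the tuned model on the held-out validation split; this is the
# number that is typically slightly worse than the training RMSE.
val_rmse = float(np.sqrt(mean_squared_error(y_val, grid.predict(X_val))))
```

The gap between `train_rmse` and `val_rmse` is the same training-versus-validation comparison the analysis reports.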

From the model-building activities, the number of attributes dropped from 58 to 30 after 28 attributes were eliminated. The processing time went from 15 minutes 1 second in iteration Take1 up to 17 minutes 37 seconds in iteration Take2, due to the additional time required for the feature selection processing.

CONCLUSION: The feature selection technique helped by cutting down the number of attributes while retaining a comparable level of accuracy. For this dataset, ElasticNet should be considered for further modeling or production use.

Dataset Used: Online News Popularity Dataset

Dataset ML Model: Regression with numerical attributes

Dataset Reference: https://archive.ics.uci.edu/ml/datasets/Online+News+Popularity

The HTML formatted report can be found here on GitHub.