Thursday, November 21, 2019

Working Progress 8: Random Forest

Random Forest:

Random forest is an ensemble method (based on the divide-and-conquer approach) of decision trees generated on randomly drawn samples of the dataset. The collection of decision tree classifiers is also known as the forest. The individual decision trees are generated using an attribute selection measure such as information gain, gain ratio, or the Gini index for each attribute, and each tree depends on an independent random sample. In a classification problem, each tree votes and the most popular class is chosen as the final result; in the case of regression, the average of all the tree outputs is taken as the final result. Random forests are relatively resistant to overfitting, mainly because the final prediction averages (or votes over) many trees, which reduces the variance of any single tree.

It works in four steps:

1. Select random samples from a given dataset.

2. Construct a decision tree for each sample and get a prediction result from each decision tree.

3. Perform a vote for each predicted result.

4. Select the prediction result with the most votes as the final prediction.

Tunable Parameters:

max_depth=8
random_state=0
n_estimators=100
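
A minimal sketch of how these parameters could be passed, assuming scikit-learn's RandomForestRegressor (the regressor variant, since the traffic-prediction task is a regression; X_train, y_train, X_test are hypothetical placeholders for our feature/target split):

    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical fit/predict flow; X_train, y_train, X_test stand in for our data split.
    rf = RandomForestRegressor(n_estimators=100,   # number of trees in the forest
                               max_depth=8,        # maximum depth of each tree
                               random_state=0)     # fixed seed for reproducibility
    rf.fit(X_train, y_train)
    predictions = rf.predict(X_test)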

Working Progress 6: Gradient Boosting

Gradient Boosting:

AdaBoost was further developed into a numerical optimization problem where the objective is to minimize the loss of the model by adding weak learners using a gradient-descent-like procedure. This class of algorithms is described as a stage-wise additive model, because one new weak learner is added at a time while the existing weak learners in the model are frozen and left unchanged.

Gradient boosting involves three elements:


1. A loss function to be optimized. The loss function used depends on the type of problem being solved. It must be differentiable, but many standard loss functions are supported and you can define your own.

2. A weak learner to make predictions. Decision trees are used as the weak learner in gradient boosting. Trees are constructed in a greedy manner, choosing the best split points either based on purity scores such as the Gini index or so as to minimize the loss.

3. An additive model to add weak learners to minimize the loss function. Trees are added one at a time, and existing trees in the model are not changed. A gradient descent procedure is used to minimize the loss when adding trees. After calculating the loss, to perform the gradient descent procedure, we must add a tree to the model that reduces the loss (i.e. follow the gradient).

Tuned Parameters:

  • n_estimators=100
  • learning_rate=0.1
  • max_depth=4
  • loss='ls'
What do these parameters mean?
loss: {‘ls’, ‘lad’, ‘huber’, ‘quantile’}, optional (default=’ls’) loss function to be optimized. ‘ls’ refers to least squares regression. ‘lad’ (least absolute deviation) is a highly robust loss function solely based on order information of the input variables. ‘huber’ is a combination of the two. ‘quantile’ allows quantile regression (use alpha to specify the quantile).

learning_rate: float, optional (default=0.1) learning rate shrinks the contribution of each tree by learning_rate. There is a trade-off between learning_rate and n_estimators.

n_estimators: int (default=100) The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting so a large number usually results in better performance.
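
A minimal sketch of how these parameters fit together, assuming scikit-learn's GradientBoostingRegressor (X_train, y_train, X_test are hypothetical placeholders for our data split; 'ls' is the least-squares loss listed above, which newer scikit-learn versions call 'squared_error'):

    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical fit/predict flow with the tuned parameters from above.
    gbr = GradientBoostingRegressor(n_estimators=100,    # number of boosting stages
                                    learning_rate=0.1,   # shrinkage applied to each tree
                                    max_depth=4,         # depth of the individual trees
                                    loss='ls')           # least-squares loss
    gbr.fit(X_train, y_train)
    predictions = gbr.predict(X_test)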

Working Progress 5: Adaboost Regressor

Adaboost Regressor:

AdaBoost combines multiple classifiers to increase the accuracy of classification. It is an iterative ensemble method. The AdaBoost classifier builds a strong classifier by combining multiple poorly performing classifiers, yielding a strong classifier with high accuracy. The basic concept behind AdaBoost is to set the weights of the classifiers and to train on the data sample in each iteration in such a way that accurate predictions of unusual observations are ensured.

1. Initially, AdaBoost selects a training subset randomly.

2. It iteratively trains the AdaBoost machine learning model by selecting the training set based on the accuracy of the predictions from the previous training round.

3. It assigns a higher weight to wrongly classified observations so that in the next iteration these observations get a higher probability of being selected.

4. It also assigns a weight to the trained classifier in each iteration according to the accuracy of the classifier; the more accurate classifier gets a higher weight.

5. This process iterates until the complete training data fits without any error or until the specified maximum number of estimators is reached.

6. To classify, perform a "vote" across all of the learners that were built.




The learning parameters that we chose for our model were:


Base estimator: Decision Tree Regressor with max depth 4.

Learning Rate: 0.01

Number of estimators: 5000

Random State: 42


What do these tunable parameters mean:

base_estimator: object, optional (default=None) The base estimator from which the boosted ensemble is built. Support for sample weighting is required. If None, then the base estimator is DecisionTreeRegressor(max_depth=3)

n_estimators: integer, optional (default=50) The maximum number of estimators at which boosting is terminated. In case of a perfect fit, the learning procedure is stopped early.

learning_rate: float, optional (default=1.) The learning rate shrinks the contribution of each regressor by learning_rate. There is a trade-off between learning_rate and n_estimators.

random_state: int, RandomState instance or None, optional (default=None) If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
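
A minimal sketch of the configuration described above, assuming scikit-learn's AdaBoostRegressor with a DecisionTreeRegressor base estimator (X_train, y_train, X_test are hypothetical placeholders for our data split):

    from sklearn.ensemble import AdaBoostRegressor
    from sklearn.tree import DecisionTreeRegressor

    # Hypothetical fit/predict flow with the parameters listed above.
    ada = AdaBoostRegressor(base_estimator=DecisionTreeRegressor(max_depth=4),
                            n_estimators=5000,     # maximum number of boosting stages
                            learning_rate=0.01,    # shrinkage applied to each regressor
                            random_state=42)
    ada.fit(X_train, y_train)
    predictions = ada.predict(X_test)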

Working Progress 4: Model Configuration for general Time Series

Here, for the given problem statement, we need a single model configuration that gives the best results.

For SARIMAX: We ran the code for p, d, q values in the range 0-2 and got the best results for p, d, q as 1, 0, 2.

For the RNN- and LSTM-based models, we ran the code with 1, 2, 3, and 4 layers and with the number of previous values on which the next value depends set to 7, 15, 30, and 45. We got the best results with 4 layers and 30 steps.

Working Progress 3: Holt and Winter's Model

Holt and Winters extended Holt's method to capture seasonality. The Holt-Winters seasonal method comprises the forecast equation and three smoothing equations: one for the level ℓ_t, one for the trend b_t, and one for the seasonal component s_t, with corresponding smoothing parameters α, β and γ. We use m to denote the frequency of the seasonality, i.e., the number of seasons in a year. For example, for quarterly data m = 4, and for monthly data m = 12.

Reference: https://otexts.com/fpp2/holt-winters.html
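
A minimal sketch of fitting such a model, assuming statsmodels' ExponentialSmoothing with additive trend and seasonality and an assumed seasonal period m = 12 (series is a hypothetical placeholder for the traffic series):

    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Additive trend and seasonality; seasonal_periods = 12 is an assumed value of m.
    model = ExponentialSmoothing(series, trend='add', seasonal='add', seasonal_periods=12)
    fit = model.fit()              # alpha, beta, gamma are estimated from the data
    forecast = fit.forecast(12)    # forecast the next 12 periods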

Results were not that good and thus the model was discarded as SARIMAX gave better results.

Working Progress 2: Vanilla RNN and LSTM for prediction

As discussed earlier, RNN and LSTM can be used to predict future time series values. Here, we split the data into train and test sets, where the data from the year 2016 is the test data and the rest is the training data. The data is windowed such that each X_i contains the previous n_steps values on which the next value depends, and Y_i is the value to be predicted. The number of layers considered was 1, as it is a vanilla RNN.
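
A minimal sketch of this windowing and a single-layer vanilla RNN, assuming Keras; traffic_values and the layer width are hypothetical, and n_steps plays the role described above:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import SimpleRNN, Dense

    def split_sequence(series, n_steps):
        # Each X row holds the previous n_steps values; y is the value that follows them.
        X, y = [], []
        for i in range(len(series) - n_steps):
            X.append(series[i:i + n_steps])
            y.append(series[i + n_steps])
        return np.array(X), np.array(y)

    n_steps = 30
    X, y = split_sequence(traffic_values, n_steps)    # traffic_values: hypothetical 1-D array
    X = X.reshape((X.shape[0], n_steps, 1))           # (samples, timesteps, features)

    model = Sequential([SimpleRNN(32, input_shape=(n_steps, 1)),  # one recurrent layer (vanilla RNN)
                        Dense(1)])
    model.compile(optimizer='adam', loss='mse')
    model.fit(X, y, epochs=20, verbose=0)
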
The result was an RMSE of 34.33, and the predicted values are visualized below:


Working Progress 1: Working On SARIMAX model

Seasonal Autoregressive Integrated Moving Average, SARIMA or Seasonal ARIMA, is an extension of ARIMA that explicitly supports univariate time-series data with a seasonal component. It adds three new hyperparameters to specify the autoregression (AR), differencing (I) and moving average (MA) for the seasonal component of the series, as well as an additional parameter for the period of the seasonality.


We checked possible combinations of p, d, q, and m for the given time series.
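
A minimal sketch of such a grid search, assuming statsmodels' SARIMAX with AIC-based selection (the selection criterion and the search ranges are assumptions; series is a hypothetical placeholder for the page-view series):

    import itertools
    import statsmodels.api as sm

    best_aic, best_order = float('inf'), None
    for p, d, q in itertools.product(range(3), repeat=3):   # assumed search range 0..2
        try:
            model = sm.tsa.statespace.SARIMAX(series,
                                              order=(p, d, q),
                                              seasonal_order=(p, d, q, 12),   # m = 12
                                              enforce_stationarity=False,
                                              enforce_invertibility=False)
            result = model.fit(disp=False)
            if result.aic < best_aic:                        # keep the lowest-AIC configuration
                best_aic, best_order = result.aic, (p, d, q)
        except Exception:
            continue
    print(best_order, best_aic)
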
After basic computations for the series, we got 0, 1, 0, 12 as the p, d, q, m values for the given series. The result was a Root Mean Square Error of 38.56, and the visualization of the prediction for the upcoming year can be seen below:

For a better understanding of SARIMAX refer: https://machinelearningmastery.com/sarima-for-time-series-forecasting-in-python/

Proposed Model: Ensemble Learning


As our first step, we plan to remove outliers/noise from our dataset, which divert our forecasts from the actual trend. We observed that each algorithm performed better on a particular type of time-series data. For example, for time series with a stronger seasonal component, the Prophet approach gives better results. Classical methods like ETS and ARIMA give better results when there are short-term dependencies in the time series, whereas complex models like RNN/LSTM give better results when there are long-term correlations in the time series.
Thus we plan to use ensemble learning for web traffic prediction. Ensemble learning combines multiple predictions (forecasts) from one or multiple methods to improve on the accuracy of a single prediction and to avoid possible overfitting. The models that we will be working with are as follows:
1) Ensembles of classical models:
  • Autoregressive (AR)
  • Moving Average (MA)
  • Autoregressive Moving Average (ARMA)
  • Autoregressive Integrated Moving Average (ARIMA)
  • Seasonal Autoregressive Integrated Moving Average (SARIMA)
2) Ensembles of LSTM models
3) Ensembles of Prophet models
Learn more about ensemble learning: https://towardsdatascience.com/ensemble-methods-in-machine-learning-what-are-they-and-why-use-them-68ec3f9fef5f
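
As a rough illustration of the combination step, a minimal sketch of averaging the forecasts produced by the individual models (the function name, inputs, and the weighting scheme are hypothetical):

    import numpy as np

    def ensemble_forecast(forecasts, weights=None):
        # forecasts: list of 1-D arrays of equal length, one per model (e.g. SARIMA, LSTM, Prophet).
        stacked = np.vstack(forecasts)
        return np.average(stacked, axis=0, weights=weights)

    # e.g. combined = ensemble_forecast([sarima_pred, lstm_pred, prophet_pred])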

Previous Work 3: Prophet


Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles outliers well.
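
A minimal sketch of how Prophet is typically used; dates and page_views are hypothetical placeholders for our series (the package was distributed as fbprophet at the time of writing):

    import pandas as pd
    from fbprophet import Prophet   # newer releases use `from prophet import Prophet`

    # Prophet expects a DataFrame with columns 'ds' (dates) and 'y' (values).
    df = pd.DataFrame({'ds': dates, 'y': page_views})   # hypothetical inputs

    m = Prophet(yearly_seasonality=True, weekly_seasonality=True)
    m.fit(df)
    future = m.make_future_dataframe(periods=60)        # extend 60 days past the history
    forecast = m.predict(future)                        # contains 'yhat' plus trend/seasonal parts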


Reference: https://www.kaggle.com/headsortails/wiki-traffic-forecast-exploration-wtf-eda
Learn more about Prophet:  https://facebook.github.io/prophet/

Previous Work 2: RNN Model


The model was rebuilt from the RNN seq2seq model, using an encoder-decoder architecture in which a cuDNN GRU was used as the encoder and a TensorFlow GRUBlockCell as the decoder, with the output of the decoder passed on to the next step until the end of the sequence.

Previous Work 1: Arima Model

ARIMA, short for ‘AutoRegressive Integrated Moving Average’, is a forecasting algorithm based on the idea that the information in the past values of the time series can alone be used to predict future values. ARIMA models are used because they can reduce a non-stationary series to a stationary series using a sequence of differencing steps. 
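
A minimal sketch of the differencing idea, assuming pandas and statsmodels' Augmented Dickey-Fuller test (page_views is a hypothetical placeholder; the 0.05 threshold is an assumption):

    import pandas as pd
    from statsmodels.tsa.stattools import adfuller

    series = pd.Series(page_views)                 # hypothetical raw traffic values
    d = 0
    # Difference until the ADF test suggests stationarity (p-value below 0.05).
    while adfuller(series.dropna())[1] > 0.05 and d < 2:
        series = series.diff()
        d += 1
    print("order of differencing d =", d)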
