--- title: "Stepwise Search" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Stepwise Search} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` ### Mathematical Background A common problem in building statistical models is determining which features to include in a model. Mathematical publications provide some suggestions, but there is no consensus. Some examples are the lasso or simply trying all possible combinations of predictors. Another option is stepwise search. The more parameters a model has, the better it will fit the data. If the model is too complex, the worse it will perform on unseen data. AIC strikes a balance between fitting the training data well and keeping the model simple enough to perform well on unseen data. Using AIC, a search starts with no features. $$g(Y) = \beta_0$$ Then each feature is considered. If there are 10 features, there are 10 models under consideration. For each model, AIC is calculated and the model with the lowest AIC is selected. In this case, X1 was selected. $$g(Y) = \beta_1X_1 + \beta_0$$ After the first feature is selected, all remaining 9 features are considered. Of the 9 features, the one with the lowest AIC is selected, creating a 2 feature model. In this round, X3 was selected. $$g(Y) = \beta_3X_3 + \beta_1X_1 + \beta_0$$ When adding more features does not lower AIC, the procedure stops. ### Simulation Setup How well does stepwise search work when there are unrelated variables? Is a large amount of data needed to find the correct model? The below tests stepwise search in two settings to answer these questions. ### Easy Problem: Large N and half the variables are unrelated ```{r setup} library(GlmSimulatoR) library(MASS) # Creating data to work with set.seed(1) simdata <- simulate_inverse_gaussian( N = 100000, link = "1/mu^2", weights = c(1, 2, 3), unrelated = 3 ) # Setting the simplest model and the most complex model. scope_arg <- list( lower = Y ~ 1, upper = Y ~ X1 + X2 + X3 + Unrelated1 + Unrelated2 + Unrelated3 ) # Run search starting_model <- glm(Y ~ 1, data = simdata, family = inverse.gaussian(link = "1/mu^2") ) glm_search <- stepAIC(starting_model, scope_arg, trace = 0) summary(glm_search) rm(simdata, scope_arg, glm_search, starting_model) ``` Looking at the summary, the correct model was found. Stepwise search worked perfectly. ### Hard Problem: Small N and most variables are unrelated ```{r} # Creating data to work with set.seed(4) simdata <- simulate_inverse_gaussian( N = 1000, link = "1/mu^2", weights = c(1, 2, 3), unrelated = 20 ) # Setting the simplest model and the most complex model. scope_arg <- list( lower = Y ~ 1, upper = Y ~ X1 + X2 + X3 + Unrelated1 + Unrelated2 + Unrelated3 + Unrelated4 + Unrelated5 + Unrelated6 + Unrelated7 + Unrelated8 + Unrelated9 + Unrelated10 + Unrelated11 + Unrelated12 + Unrelated13 + Unrelated14 + Unrelated15 + Unrelated16 + Unrelated17 + Unrelated18 + Unrelated19 + Unrelated20 ) # Run search starting_model <- glm(Y ~ 1, data = simdata, family = inverse.gaussian(link = "1/mu^2") ) glm_search <- stepAIC(starting_model, scope_arg, trace = 0) summary(glm_search) rm(simdata, scope_arg, glm_search, starting_model) ``` All predictive features and a few unrelated features were selected. Considering the number of features and the low sample size, the search worked well.