This vignette provides a guide to performing cross validation with modelr and maxcovr. Cross validation is a slightly unusual concept in this context: we are not trying to make predictions, but rather to evaluate the performance of maxcovr more fairly.
If there is demand for this feature, I will incorporate cross validation into maxcovr in the future. In the meantime, here is a vignette.
We will stick with the previous example, using `york` and `york_crime`.
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
##     filter, lag
## The following objects are masked from 'package:base':
##
##     intersect, setdiff, setequal, union
# subset to be the places with towers built on them.
york_selected <- york %>% filter(grade == "I")
york_unselected <- york %>% filter(grade != "I")
Thanks to the modelr package, it is relatively straightforward to perform cross validation.
# first we partition the data into 5 folds
library(modelr)
library(purrr)
library(tibble)
mc_cv <- modelr::crossv_kfold(york_crime, 5) %>%
  # convert the test and train sets from `resample` objects
  # to tibbles
  mutate(test = map(test, as_tibble),
         train = map(train, as_tibble))
This creates a dataframe with test and training sets:
## # A tibble: 5 × 3
## train test .id
## <named list> <named list> <chr>
## 1 <tibble [1,451 × 12]> <tibble [363 × 12]> 1
## 2 <tibble [1,451 × 12]> <tibble [363 × 12]> 2
## 3 <tibble [1,451 × 12]> <tibble [363 × 12]> 3
## 4 <tibble [1,451 × 12]> <tibble [363 × 12]> 4
## 5 <tibble [1,452 × 12]> <tibble [362 × 12]> 5
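Each `train` and `test` element is now a tibble, so an individual fold can be inspected directly, for example:

```r
# look at the first training fold: a tibble of york_crime rows
mc_cv$train[[1]]
```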
We then fit the model on the training set using `map_df()` from the purrr package.
library(purrr)
# then we fit the model
system.time(
  mc_cv_fit <- map_df(mc_cv$train,
                      ~max_coverage(existing_facility = york_selected,
                                    proposed_facility = york_unselected,
                                    user = ., # training set goes here
                                    n_added = 20,
                                    distance_cutoff = 100))
)
## user system elapsed
## 2.658 0.107 2.766
Then we can use the `summary_mc_cv` function to extract the summaries from each fold. This summary takes the facilities placed using the training set of users, then takes the test set of users and counts what percentage of them are covered by the training model.
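This follows the same pattern used later in this vignette, passing the fitted models and the cross validation folds:

```r
# summarise each fold: facilities from the training fit,
# coverage counted on the held-out test users
summarised_cv <- summary_mc_cv(mc_cv_fit, mc_cv)
```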
| n_added | n_fold | distance_within | n_cov | pct_cov | n_not_cov | pct_not_cov | dist_avg | dist_sd |
|---------|--------|-----------------|-------|---------|-----------|-------------|----------|---------|
| 20 | 1 | 100 | 40 | 0.1101928 | 323 | 0.8898072 | 1062.751 | 1319.256 |
| 20 | 2 | 100 | 72 | 0.1983471 | 291 | 0.8016529 | 1115.713 | 1521.413 |
| 20 | 3 | 100 | 43 | 0.1184573 | 320 | 0.8815427 | 1225.803 | 1517.459 |
| 20 | 4 | 100 | 46 | 0.1267218 | 317 | 0.8732782 | 1153.920 | 1599.603 |
| 20 | 5 | 100 | 40 | 0.1104972 | 322 | 0.8895028 | 1305.993 | 1592.741 |
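To summarise across folds, one could also average the coverage; a small sketch with dplyr, assuming the `summarised_cv` data frame of fold summaries:

```r
library(dplyr)

# mean and spread of held-out coverage across the 5 folds
summarised_cv %>%
  summarise(mean_pct_cov = mean(pct_cov),
            sd_pct_cov = sd(pct_cov))
```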
Eyeballing the values, it looks like the percent coverage stays around 10%, but we can plot it to get a better idea. We can also overlay the coverage obtained using the full dataset to get an idea of how we are performing.
library(ggplot2)
summarised_cv %>%
  ggplot(aes(x = n_fold,
             y = pct_cov)) +
  geom_point() +
  geom_line(group = 1) +
  theme_minimal()
Here we see that the pct_coverage doesn’t seem to change much across the folds.
Coming up next, we will explore how to perform cross validation as we increase the number of facilities added.
Ideally, there should be a way to do this using purrr, so we don't have to fit 5 separate models, but perhaps this will change when we enable n_added to take a vector of values.
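As a sketch of what that might look like, one could map over a vector of `n_added` values rather than writing out five near-identical fits. This is not part of the maxcovr API, just a purrr idiom assuming the `mc_cv`, `york_selected`, and `york_unselected` objects from above:

```r
library(purrr)

# hypothetical sketch: one fit per n_added value, each fit
# itself mapped over the 5 training folds
n_added_vals <- c(20, 40, 60, 80, 100)

mc_cv_fits <- map(n_added_vals,
                  function(n) {
                    map_df(mc_cv$train,
                           ~max_coverage(existing_facility = york_selected,
                                         proposed_facility = york_unselected,
                                         user = .,
                                         n_added = n,
                                         distance_cutoff = 100))
                  })
```

For now, we fit each model separately.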
# then we fit the model
system.time(
  mc_cv_fit_n20 <- map_df(mc_cv$train,
                          ~max_coverage(existing_facility = york_selected,
                                        proposed_facility = york_unselected,
                                        user = ., # training set goes here
                                        n_added = 20,
                                        distance_cutoff = 100))
)
## user system elapsed
## 2.218 0.057 2.274
system.time(
  mc_cv_fit_n40 <- map_df(mc_cv$train,
                          ~max_coverage(existing_facility = york_selected,
                                        proposed_facility = york_unselected,
                                        user = ., # training set goes here
                                        n_added = 40,
                                        distance_cutoff = 100))
)
## user system elapsed
## 2.278 0.036 2.315
system.time(
  mc_cv_fit_n60 <- map_df(mc_cv$train,
                          ~max_coverage(existing_facility = york_selected,
                                        proposed_facility = york_unselected,
                                        user = ., # training set goes here
                                        n_added = 60,
                                        distance_cutoff = 100))
)
## user system elapsed
## 2.424 0.028 2.452
system.time(
  mc_cv_fit_n80 <- map_df(mc_cv$train,
                          ~max_coverage(existing_facility = york_selected,
                                        proposed_facility = york_unselected,
                                        user = ., # training set goes here
                                        n_added = 80,
                                        distance_cutoff = 100))
)
## user system elapsed
## 2.463 0.028 2.491
system.time(
  mc_cv_fit_n100 <- map_df(mc_cv$train,
                           ~max_coverage(existing_facility = york_selected,
                                         proposed_facility = york_unselected,
                                         user = ., # training set goes here
                                         n_added = 100,
                                         distance_cutoff = 100))
)
## user system elapsed
## 30.386 0.036 30.425
summarised_cv_n20 <- summary_mc_cv(mc_cv_fit_n20, mc_cv)
summarised_cv_n40 <- summary_mc_cv(mc_cv_fit_n40, mc_cv)
summarised_cv_n60 <- summary_mc_cv(mc_cv_fit_n60, mc_cv)
summarised_cv_n80 <- summary_mc_cv(mc_cv_fit_n80, mc_cv)
summarised_cv_n100 <- summary_mc_cv(mc_cv_fit_n100, mc_cv)
bound_testing_summaries <- bind_rows(summarised_cv_n20,
                                     summarised_cv_n40,
                                     summarised_cv_n60,
                                     summarised_cv_n80,
                                     summarised_cv_n100) %>%
  mutate(type = "test")
It looks like the more facilities we add, the better the coverage, mostly.
bound_training_summaries <- bind_rows(mc_cv_fit_n20$model_coverage,
                                      mc_cv_fit_n40$model_coverage,
                                      mc_cv_fit_n60$model_coverage,
                                      mc_cv_fit_n80$model_coverage,
                                      mc_cv_fit_n100$model_coverage) %>%
  mutate(type = "training")

bound_all_summaries <- bind_rows(bound_testing_summaries,
                                 bound_training_summaries)
ggplot(bound_testing_summaries,
       aes(x = n_fold,
           y = pct_cov,
           colour = factor(n_added),
           group = factor(n_added))) +
  geom_point() +
  geom_line() +
  theme_minimal()
Let’s look at this another way, with boxplots for the number of facilities added.
ggplot(bound_testing_summaries,
       aes(x = factor(n_added),
           y = n_cov)) +
  geom_boxplot() +
  theme_minimal()
We can also compare the percent coverage for the test and training datasets.
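One way to draw that comparison is to facet or colour by the `type` column created above. A sketch using the `bound_all_summaries` object; the exact aesthetics here are an assumption, not taken from the original vignette:

```r
library(ggplot2)

# compare held-out (test) vs in-sample (training) coverage
# at each number of added facilities
ggplot(bound_all_summaries,
       aes(x = factor(n_added),
           y = pct_cov,
           colour = type)) +
  geom_boxplot() +
  theme_minimal()
```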