Since the middle of 2010 we’ve been monitoring the level of Goldstream Creek for the National Weather Service by measuring the distance from the top of our bridge to the surface of the water or ice. In 2014 the Creek flooded and washed the bridge downstream. We eventually raised the bridge logs back up onto the banks and resumed our measurements.
This winter the Creek had been relatively quiet, with the level hovering around eight feet below the bridge. But last Friday, we awoke to more than four feet of water over the ice, and since then it's continued to rise. This morning’s reading had the ice only 3.17 feet below the surface of the bridge.
Water also entered the far side of the slough, and is making its way around the loop, melting the snow covering the old surface. Even as the main channel stops rising and freezes, water moves closer to the dog yard from the slough.
One of my longer commutes to work involves riding east on the Goldstream Valley trails, crossing the Creek by Ballaine Road, then riding back toward the house on the north side of the Creek. From there, I can cross Goldstream Creek again where the trail at the end of Miller Hill Road and the Miller Hill Extension trail meet, and ride the trails the rest of the way to work. That crossing is also covered with several feet of water and ice.
Yesterday one of my neighbors sent an email with the subject line, “Are we doomed?”, so I took a look at the height data from past years. The plot below shows the height of the Creek as measured from the surface of the bridge (click on the plot to view or download a PDF; the R code used to generate the plot appears at the bottom of this post).
The orange region is the region where the Creek is flowing; between my reporting of 0% ice in spring and 100% ice-covered in fall. The data gap in July 2014 was due to the flood washing the bridge downstream. Because the bridge isn’t in the same location, the height measurements before and after the flood aren’t completely comparable, but I don’t have the data for the difference in elevation between the old and new bridge locations, so this is the best we’ve got.
The light blue line across all the plots shows the current height of the Creek (3.17 feet) for all years of data. 2012 is probably the year closest to our current situation, when the Creek rose to around five feet below the bridge in early January. But really nothing is completely comparable to the situation we’re in right now. Breakup won’t come for another two or three months, and in most years the Creek rises several feet between February and breakup.
Time will tell, of course, but here’s why I’m not too worried about it. There’s another bridge crossing several miles downstream, and last Friday there was no water on the surface, and the Creek was easily ten feet below the banks. That means that there is a lot of space within the banks of the Creek downstream that can absorb the melting water as breakup happens. I also think that there is a lot of liquid water trapped beneath the ice on the surface in our neighborhood and that water is likely to slowly drain out downstream, leaving a lot of empty space below the surface ice that can accommodate further overflow as the winter progresses. In past years of walking on the Creek I’ve come across huge areas where the top layer of ice dropped as much as six feet when the water underneath drained away. I’m hoping that this happens here, with a lot of the subsurface water draining downstream.
The Creek is always reminding us of how little we really understand what’s going on and how even a small amount of flowing water can become a huge force when that water accumulates more rapidly than the Creek can handle it. Never a dull moment!
Code
library(readr)
library(dplyr)
library(tidyr)
library(lubridate)
library(ggplot2)
library(scales)
wxcoder <- read_csv("data/wxcoder.csv", na=c("-9999"))
feb_2016_incomplete <- read_csv("data/2016_02_incomplete.csv",
                                na=c("-9999"))
wxcoder <- rbind(wxcoder, feb_2016_incomplete)
wxcoder <- wxcoder %>%
   transmute(dte=as.Date(ymd(DATE)), tmin_f=TN, tmax_f=TX, tobs_f=TA,
             tavg_f=(tmin_f+tmax_f)/2.0,
             prcp_in=ifelse(PP=='T', 0.005, as.numeric(PP)),
             snow_in=ifelse(SF=='T', 0.05, as.numeric(SF)),
             snwd_in=SD, below_bridge_ft=HG,
             ice_cover_pct=IC)
creek <- wxcoder %>% filter(dte>as.Date(ymd("2010-05-27")))
creek_w_year <- creek %>%
   mutate(year=year(dte),
          doy=yday(dte))
ice_free_date <- creek_w_year %>%
   group_by(year) %>%
   filter(ice_cover_pct==0) %>%
   summarize(ice_free_dte=min(dte), ice_free_doy=min(doy))
ice_covered_date <- creek_w_year %>%
   group_by(year) %>%
   filter(ice_cover_pct==100, doy>182) %>%
   summarize(ice_covered_dte=min(dte), ice_covered_doy=min(doy))
flowing_creek_dates <- ice_free_date %>%
   inner_join(ice_covered_date, by="year") %>%
   mutate(ymin=Inf, ymax=-Inf)
latest_obs <- creek_w_year %>%
   mutate(rank=rank(desc(dte))) %>%
   filter(rank==1)
current_height_df <- data.frame(
   year=c(2011, 2012, 2013, 2014, 2015, 2016),
   below_bridge_ft=latest_obs$below_bridge_ft)
q <- ggplot(data=creek_w_year %>% filter(year>2010),
            aes(x=doy, y=below_bridge_ft)) +
   theme_bw() +
   geom_rect(data=flowing_creek_dates %>% filter(year>2010),
             aes(xmin=ice_free_doy, xmax=ice_covered_doy,
                 ymin=ymin, ymax=ymax),
             fill="darkorange", alpha=0.4,
             inherit.aes=FALSE) +
   # geom_point(size=0.5) +
   geom_line() +
   geom_hline(data=current_height_df,
              aes(yintercept=below_bridge_ft),
              colour="darkcyan", alpha=0.4) +
   scale_x_continuous(name="",
                      breaks=c(1, 32, 60, 91, 121, 152, 182,
                               213, 244, 274, 305, 335, 365),
                      labels=c("Jan", "Feb", "Mar", "Apr",
                               "May", "Jun", "Jul", "Aug",
                               "Sep", "Oct", "Nov", "Dec",
                               "Jan")) +
   scale_y_reverse(name="Creek height, feet below bridge",
                   breaks=pretty_breaks(n=5)) +
   facet_wrap(~ year, ncol=1)
width <- 16
height <- 16
rescale <- 0.75
pdf("creek_heights_2010-2016_by_year.pdf",
    width=width*rescale, height=height*rescale)
print(q)
dev.off()
svg("creek_heights_2010-2016_by_year.svg",
    width=width*rescale, height=height*rescale)
print(q)
dev.off()
Introduction
This week a class action lawsuit was filed against FitBit, claiming that their heart rate fitness trackers don’t perform as advertised, specifically during exercise. I’ve been wearing a FitBit Charge HR since October, and also wear a Scosche Rhythm+ heart rate monitor whenever I exercise, so I have a lot of data that I can use to assess the legitimacy of the lawsuit. The data and RMarkdown file used to produce this post is available from GitHub.
The Data
Heart rate data from the Rhythm+ is collected by the RunKeeper app on my phone, and after transferring the data to RunKeeper’s website, I can download GPX files containing all the GPS and heart rate data for each exercise. Data from the Charge HR is a little harder to get, but with the proper tools and permission from FitBit, you can get what they call “intraday” data. I use the fitbit Python library and a set of routines I wrote (also available from GitHub) to pull this data.
The data includes 116 activities, mostly from commuting to and from work by bicycle, fat bike, or on skis. The first step in the process is to pair the two data sets, but since the exact moments when each sensor recorded data won’t match, I grouped both sets of data into 15-second intervals and calculated the mean heart rate for each sensor within that 15-second window. The result looks like this:
load("matched_hr_data.rdata")
kable(head(matched_hr_data))
dt_rounded | track_id | type | min_temp | max_temp | rhythm_hr | fitbit_hr |
---|---|---|---|---|---|---|
2015-10-06 06:18:00 | 3399 | Bicycling | 17.8 | 35 | 103.00000 | 100.6 |
2015-10-06 06:19:00 | 3399 | Bicycling | 17.8 | 35 | 101.50000 | 94.1 |
2015-10-06 06:20:00 | 3399 | Bicycling | 17.8 | 35 | 88.57143 | 97.1 |
2015-10-06 06:21:00 | 3399 | Bicycling | 17.8 | 35 | 115.14286 | 104.2 |
2015-10-06 06:22:00 | 3399 | Bicycling | 17.8 | 35 | 133.62500 | 107.4 |
2015-10-06 06:23:00 | 3399 | Bicycling | 17.8 | 35 | 137.00000 | 113.3 |
... | ... | ... | ... | ... | ... | ... |
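The code for the bucketing step described above isn’t shown here. As a minimal sketch of the idea (in Python rather than R, with hypothetical sample data): round each sensor’s timestamps down to a common 15-second boundary and average whatever readings fall in each window.

```python
from datetime import datetime, timedelta

def bucket_15s(ts: datetime) -> datetime:
    """Round a timestamp down to the start of its 15-second window."""
    return ts - timedelta(seconds=ts.second % 15, microseconds=ts.microsecond)

def mean_by_window(samples):
    """samples: iterable of (timestamp, heart_rate) pairs from one sensor.
    Returns {window_start: mean heart rate within that window}."""
    sums, counts = {}, {}
    for ts, hr in samples:
        key = bucket_15s(ts)
        sums[key] = sums.get(key, 0.0) + hr
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}
```

Applying this to both sensors and joining the two dictionaries on window start gives the paired rows shown in the table.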
Let’s take a quick look at a few of these activities. The squiggly lines show the heart rate data from the two sensors, and the horizontal lines show the average heart rate for the activity. In both cases, the FitBit Charge HR is shown in red and the Scosche Rhythm+ is blue.
library(dplyr)
library(tidyr)
library(ggplot2)
library(lubridate)
library(scales)
heart_rate <- matched_hr_data %>%
   transmute(dt=dt_rounded, track_id=track_id,
             title=paste(strftime(as.Date(dt_rounded, tz="US/Alaska"),
                                  "%b-%d"),
                         type),
             fitbit=fitbit_hr, rhythm=rhythm_hr) %>%
   gather(key=sensor, value=hr, fitbit:rhythm) %>%
   filter(track_id %in% c(3587, 3459, 3437, 3503))
activity_means <- heart_rate %>%
   group_by(track_id, sensor) %>%
   summarize(hr=mean(hr))
facet_labels <- heart_rate %>% select(track_id, title) %>% distinct()
hr_labeller <- function(values) {
   lapply(values, FUN=function(x) (facet_labels %>% filter(track_id==x))$title)
}
r <- ggplot(data=heart_rate,
            aes(x=dt, y=hr, colour=sensor)) +
   geom_hline(data=activity_means,
              aes(yintercept=hr, colour=sensor), alpha=0.5) +
   geom_line() +
   theme_bw() +
   scale_color_brewer(name="Sensor",
                      breaks=c("fitbit", "rhythm"),
                      labels=c("FitBit Charge HR", "Scosche Rhythm+"),
                      palette="Set1") +
   scale_x_datetime(name="Time") +
   theme(axis.text.x=element_blank(), axis.ticks.x=element_blank()) +
   scale_y_continuous(name="Heart rate (bpm)") +
   facet_wrap(~track_id, scales="free", labeller=hr_labeller, ncol=1) +
   ggtitle("Comparison between heart rate monitors during a single activity")
print(r)
You can see that for each activity type, one of the plots shows data where the two heart rate monitors track well, and one where they don’t. And when they don’t agree, the FitBit is wildly inaccurate. When I initially got my FitBit I experimented with different positions for the device on my arm, but it didn’t seem to matter, so I settled on FitBit’s advice, which is to place the band slightly higher on the wrist (two to three fingers from the wrist bone) than in normal use.
One other pattern is evident from the two plots where the FitBit does poorly: the heart rate readings are always much lower than reality.
A scatterplot of all the data, plotting the FitBit heart rate against the Rhythm+ shows the overall pattern.
q <- ggplot(data=matched_hr_data,
            aes(x=rhythm_hr, y=fitbit_hr, colour=type)) +
   geom_abline(intercept=0, slope=1) +
   geom_point(alpha=0.25, size=1) +
   geom_smooth(method="lm", inherit.aes=FALSE,
               aes(x=rhythm_hr, y=fitbit_hr)) +
   theme_bw() +
   scale_x_continuous(name="Scosche Rhythm+ heart rate (bpm)") +
   scale_y_continuous(name="FitBit Charge HR heart rate (bpm)") +
   scale_colour_brewer(name="Activity type", palette="Set1") +
   ggtitle("Comparison between heart rate monitors during exercise")
print(q)
If the FitBit device were always accurate, the points would all be distributed along the 1:1 line, which is the diagonal black line under the point cloud. The blue diagonal line shows the actual linear relationship between the FitBit and Rhythm+ data. What’s curious is that the two lines cross near 100 bpm, which means that the FitBit underestimates heart rate when my heart is beating fast, but overestimates it when it’s not.
The color of the points indicates the type of activity for each point, and you can see that most of the lower heart rate points (and the overestimation by the FitBit) come from hiking activities. Is it the type of activity that triggers over- or underestimation of heart rate by the FitBit, or is it just that all the lower heart rate activities tend to be hiking?
Another way to look at the same data is to calculate the difference between the Rhythm+ and FitBit and plot those anomalies against the actual (Rhythm+) heart rate.
anomaly_by_hr <- matched_hr_data %>%
   mutate(anomaly=fitbit_hr-rhythm_hr) %>%
   select(rhythm_hr, anomaly, type)
q <- ggplot(data=anomaly_by_hr,
            aes(x=rhythm_hr, y=anomaly, colour=type)) +
   geom_abline(intercept=0, slope=0, alpha=0.5) +
   geom_point(alpha=0.25, size=1) +
   theme_bw() +
   scale_x_continuous(name="Scosche Rhythm+ heart rate (bpm)",
                      breaks=pretty_breaks(n=10)) +
   scale_y_continuous(name="Difference between FitBit Charge HR and Rhythm+ (bpm)",
                      breaks=pretty_breaks(n=10)) +
   scale_colour_brewer(palette="Set1")
print(q)
In this case, all the points should be distributed along the zero line (no difference between FitBit and Rhythm+). We can see a large bluish (fat biking) cloud around the line between 130 and 165 bpm (indicating good results from the FitBit), but the rest of the points appear to be well distributed along a diagonal line which crosses the zero line around 90 bpm. It’s another way of saying the same thing: at lower heart rates the FitBit tends to overestimate heart rate, and as my heart rate rises above 90 beats per minute, the FitBit underestimates heart rate to a greater and greater extent.
Student’s t-test and results
A Student’s t-test can be used effectively with paired data like this to judge whether the two data sets are statistically different from one another. This routine runs a paired t-test on the data from each activity, testing the null hypothesis that the FitBit heart rate values are the same as the Rhythm+ values. I’m tacking on significance labels typical in analyses like these where one asterisk indicates the results would only happen by chance 5% of the time, two asterisks mean random data would only show this pattern 1% of the time, and three asterisks mean there’s less than a 0.1% chance of this happening by chance.
One note: There are 116 activities, so at the 0.05 significance level, we would expect five or six of them to be different just by chance. That doesn’t mean our overall conclusions are suspect, but you do have to keep the number of tests in mind when looking at the results.
t_tests <- matched_hr_data %>%
   group_by(track_id, type, min_temp, max_temp) %>%
   summarize_each(funs(p_value=t.test(., rhythm_hr, paired=TRUE)$p.value,
                       anomaly=t.test(., rhythm_hr, paired=TRUE)$estimate[1]),
                  vars=fitbit_hr) %>%
   ungroup() %>%
   mutate(sig=ifelse(p_value<0.001, '***',
              ifelse(p_value<0.01, '**',
              ifelse(p_value<0.05, '*', '')))) %>%
   select(track_id, type, min_temp, max_temp, anomaly, p_value, sig)
kable(head(t_tests))
track_id | type | min_temp | max_temp | anomaly | p_value | sig |
---|---|---|---|---|---|---|
3399 | Bicycling | 17.8 | 35.0 | -27.766016 | 0.0000000 | *** |
3401 | Bicycling | 37.0 | 46.6 | -12.464228 | 0.0010650 | ** |
3403 | Bicycling | 15.8 | 38.0 | -4.714672 | 0.0000120 | *** |
3405 | Bicycling | 42.4 | 44.3 | -1.652476 | 0.1059867 | |
3407 | Bicycling | 23.3 | 40.0 | -7.142151 | 0.0000377 | *** |
3409 | Bicycling | 44.6 | 45.5 | -3.441501 | 0.0439596 | * |
... | ... | ... | ... | ... | ... | ... |
It’s easier to interpret the results summarized by activity type:
t_test_summary <- t_tests %>%
   mutate(different=grepl('\\*', sig)) %>%
   select(type, anomaly, different) %>%
   group_by(type, different) %>%
   summarize(n=n(),
             mean_anomaly=mean(anomaly))
kable(t_test_summary)
type | different | n | mean_anomaly |
---|---|---|---|
Bicycling | FALSE | 2 | -1.169444 |
Bicycling | TRUE | 26 | -20.847136 |
Fat Biking | FALSE | 15 | -1.128833 |
Fat Biking | TRUE | 58 | -14.958953 |
Hiking | FALSE | 2 | -0.691730 |
Hiking | TRUE | 8 | 10.947165 |
Skiing | TRUE | 5 | -28.710941 |
What this shows is that the FitBit underestimated heart rate by an average of 21 beats per minute in 26 of 28 (93%) bicycling trips, underestimated heart rate by an average of 15 bpm in 58 of 73 (79%) fat biking trips, overestimated heart rate by an average of 11 bpm in 8 of 10 (80%) hiking trips, and always drastically underestimated my heart rate while skiing.
For all the data:
t.test(matched_hr_data$fitbit_hr, matched_hr_data$rhythm_hr, paired=TRUE)
## 
##  Paired t-test
## 
## data:  matched_hr_data$fitbit_hr and matched_hr_data$rhythm_hr
## t = -38.6232, df = 4461, p-value < 2.2e-16
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -13.02931 -11.77048
## sample estimates:
## mean of the differences 
##                -12.3999
Indeed, in aggregate, the FitBit does a poor job at estimating heart rate during exercise.
Conclusion
Based on my data of more than 100 activities, I’d say the lawsuit has some merit. I only get accurate heart rate readings during exercise from my FitBit Charge HR about 16% of the time, and the error in the heart rate estimates appears to get worse as my actual heart rate increases. The advertising for these devices gives you the impression that they’re designed for high intensity exercise (showing people being very active, running, bicycling, etc.), but their performance during these activities is pretty poor.
All that said, I knew this going in when I bought my FitBit, so I’m not hugely disappointed. There are plenty of other benefits to monitoring the data from these devices (including non-exercise heart rate), and it isn’t a major inconvenience for me to strap on a more accurate heart rate monitor for those times when it actually matters.
Introduction
While riding to work this morning I figured out a way to disentangle the effects of trail quality and physical conditioning (both of which improve over the season) from temperature, which also tends to increase as the season progresses. As you may recall from my previous post, I found that days into the season (winter day of year) and minimum temperature were both negatively related to fat bike energy consumption. But because those variables are also related to each other, we can’t make statements about either one individually.
But what if we look at pairs of trips that are within two days of each other and look at the difference in temperature between those trips and the difference in energy consumption? We’ll only pair trips going the same direction (to or from work), and we’ll restrict the pairings to two days or less. That eliminates seasonality from the data because we’re always comparing two trips from the same few days.
Data
For this analysis, I’m using SQL to filter the data because I’m better at window functions and filtering in SQL than R. Here’s the code to grab the data from the database. (The CSV file and RMarkdown script is on my GitHub repo for this analysis). The trick here is to categorize trips as being to work (“north”) or from work (“south”) and then include this field in the partition statement of the window function so I’m only getting the next trip that matches direction.
library(dplyr)
library(ggplot2)
library(scales)
exercise_db <- src_postgres(host="example.com", dbname="exercise_data")
diffs <- tbl(exercise_db,
             build_sql(
   "WITH all_to_work AS (
      SELECT *,
         CASE WHEN extract(hour from start_time) < 11
              THEN 'north' ELSE 'south' END AS direction
      FROM track_stats
      WHERE type = 'Fat Biking'
        AND miles between 4 and 4.3
   ), with_next AS (
      SELECT track_id, start_time, direction, kcal, miles, min_temp,
             lead(direction) OVER w AS next_direction,
             lead(start_time) OVER w AS next_start_time,
             lead(kcal) OVER w AS next_kcal,
             lead(miles) OVER w AS next_miles,
             lead(min_temp) OVER w AS next_min_temp
      FROM all_to_work
      WINDOW w AS (PARTITION BY direction ORDER BY start_time)
   )
   SELECT start_time, next_start_time, direction,
          min_temp, next_min_temp,
          kcal / miles AS kcal_per_mile,
          next_kcal / next_miles AS next_kcal_per_mile,
          next_min_temp - min_temp AS temp_diff,
          (next_kcal / next_miles) - (kcal / miles) AS kcal_per_mile_diff
   FROM with_next
   WHERE next_start_time - start_time < '60 hours'
   ORDER BY start_time")) %>% collect()
write.csv(diffs, file="fat_biking_trip_diffs.csv", quote=TRUE,
          row.names=FALSE)
kable(head(diffs))
start time | next start time | temp diff | kcal / mile diff |
---|---|---|---|
2013-12-03 06:21:49 | 2013-12-05 06:31:54 | 3.0 | -13.843866 |
2013-12-03 15:41:48 | 2013-12-05 15:24:10 | 3.7 | -8.823329 |
2013-12-05 06:31:54 | 2013-12-06 06:39:04 | 23.4 | -22.510564 |
2013-12-05 15:24:10 | 2013-12-06 16:38:31 | 13.6 | -5.505662 |
2013-12-09 06:41:07 | 2013-12-11 06:15:32 | -27.7 | -10.227048 |
2013-12-09 13:44:59 | 2013-12-11 16:00:11 | -25.4 | -1.034789 |
Out of a total of 123 trips, 70 took place within 2 days of each other. We still don’t have a measure of trail quality, so pairs where the trail is smooth and hard one day and covered with fresh snow the next won’t be particularly good data points.
Let’s look at a plot of the data.
s <- ggplot(data=diffs,
            aes(x=temp_diff, y=kcal_per_mile_diff)) +
   geom_point() +
   geom_smooth(method="lm", se=FALSE) +
   scale_x_continuous(name="Temperature difference between paired trips (degrees F)",
                      breaks=pretty_breaks(n=10)) +
   scale_y_continuous(name="Energy consumption difference (kcal / mile)",
                      breaks=pretty_breaks(n=10)) +
   theme_bw() +
   ggtitle("Paired fat bike trips to and from work within 2 days of each other")
print(s)
This shows that when the temperature difference between two paired trips is negative (the second trip is colder than the first), additional energy is required for the second (colder) trip. This matches the pattern we saw in my earlier post where minimum temperature and winter day of year were negatively associated with energy consumption. But because we’ve used differences to remove seasonal effects, we can actually determine how large of an effect temperature has.
There are quite a few outliers here. Those in the region with very little difference in temperature are likely due to snowfall changing the trail conditions from one trip to the next. I’m not sure why there is so much scatter among the points on the left side of the graph, but I don’t see any particular pattern among those points that might explain the higher than normal variation, and we don’t see the same variation in the points with a large positive difference in temperature, so I think this is just normal variation in the data not explained by temperature.
Results
Here’s the linear regression results for this data.
summary(lm(data=diffs, kcal_per_mile_diff ~ temp_diff))
## 
## Call:
## lm(formula = kcal_per_mile_diff ~ temp_diff, data = diffs)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -40.839  -4.584  -0.169   3.740  47.063 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -2.1696     1.5253  -1.422    0.159    
## temp_diff    -0.7778     0.1434  -5.424 8.37e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 12.76 on 68 degrees of freedom
## Multiple R-squared:  0.302,  Adjusted R-squared:  0.2917 
## F-statistic: 29.42 on 1 and 68 DF,  p-value: 8.367e-07
The model and coefficient are both highly significant, and as we might expect, the intercept in the model is not significantly different from zero (if there were no difference in temperature between two trips, there shouldn’t be a difference in energy consumption either, on average). Temperature alone explains 30% of the variation in energy consumption, and the coefficient tells us the scale of the effect: each degree drop in temperature results in an increase in energy consumption of 0.78 kilocalories per mile. So for a 4-mile commute like mine, the difference between a trip at 10°F and one at −20°F is an additional 93 kilocalories (30 × 0.7778 × 4 = 93.34) on the colder trip. That might not sound like much in the context of the calories in food (93 kilocalories is about the energy in a large orange or a light beer), but my average energy consumption across all fat bike trips to and from work is 377 kilocalories, so 93 represents a large portion of the total.
Introduction
I’ve had a fat bike since late November 2013, mostly using it to commute the 4.1 miles to and from work on the Goldstream Valley trail system. I used to classic ski exclusively, but that’s not particularly pleasant once the temperatures are below 0°F because I can’t keep my hands and feet warm enough, and the amount of glide you get on skis declines as the temperature goes down.
However, it’s also true that fat biking gets much harder the colder it gets. I think this is partly due to biking while wearing lots of extra layers, but also because of increased friction between the large tires and tubes in a fat bike. In this post I will look at how temperature and other variables affect the performance of a fat bike (and its rider).
The code and data for this post is available on GitHub.
Data
I log all my commutes (and other exercise) using the RunKeeper app, which uses the phone’s GPS to keep track of distance and speed, and connects to my heart rate monitor to track heart rate. I had been using a Polar HR chest strap, but after about a year it became flaky and I replaced it with a Scosche Rhythm+ arm band monitor. The data from RunKeeper is exported into GPX files, which I process and insert into a PostgreSQL database.
From the heart rate data, I estimate energy consumption (in kilocalories, or what appears on food labels as calories) using a formula from Keytel LR, et al. 2005, which I talk about in this blog post.
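As a rough illustration of the kind of regression Keytel et al. published (this is the commonly cited variant for men without a VO2max term; treat the coefficients and the example rider below as illustrative, not as the exact values used in this post), expressed as a Python sketch:

```python
def kcal_per_min(hr_bpm: float, weight_kg: float, age_yr: float) -> float:
    """Approximate energy expenditure (kcal/min) from heart rate for men,
    per the Keytel et al. (2005) regression without a VO2max term.
    Coefficients as commonly cited; verify against the paper before use."""
    kj_per_min = (-55.0969 + 0.6309 * hr_bpm
                  + 0.1988 * weight_kg + 0.2017 * age_yr)
    return kj_per_min / 4.184  # convert kJ/min to kcal/min

# e.g. a hypothetical 35-minute commute at an average of 156 bpm
# for an 80 kg, 40-year-old rider:
total_kcal = kcal_per_min(156, 80, 40) * 35
```

Summing the per-minute estimate over an activity’s heart rate samples gives the kilocalorie totals used below.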
Let’s take a look at the data:
library(dplyr)
library(ggplot2)
library(scales)
library(lubridate)
library(munsell)
fat_bike <- read.csv("fat_bike.csv", stringsAsFactors=FALSE, header=TRUE) %>%
   tbl_df() %>%
   mutate(start_time=ymd_hms(start_time, tz="US/Alaska"))
kable(head(fat_bike))
start_time | miles | time | hours | mph | hr | kcal | min_temp | max_temp |
---|---|---|---|---|---|---|---|---|
2013-11-27 06:22:13 | 4.17 | 0:35:11 | 0.59 | 7.12 | 157.8 | 518.4 | -1.1 | 1.0 |
2013-11-27 15:27:01 | 4.11 | 0:35:49 | 0.60 | 6.89 | 156.0 | 513.6 | 1.1 | 2.2 |
2013-12-01 12:29:27 | 4.79 | 0:55:08 | 0.92 | 5.21 | 172.6 | 951.5 | -25.9 | -23.9 |
2013-12-03 06:21:49 | 4.19 | 0:39:16 | 0.65 | 6.40 | 148.4 | 526.8 | -4.6 | -2.1 |
2013-12-03 15:41:48 | 4.22 | 0:30:56 | 0.52 | 8.19 | 154.6 | 434.5 | 6.0 | 7.9 |
2013-12-05 06:31:54 | 4.14 | 0:32:14 | 0.54 | 7.71 | 155.8 | 463.2 | -1.6 | 2.9 |
There are a few things we need to do to the raw data before analyzing it. First, we want to restrict the data to just my commutes to and from work, and we want to categorize them as being one or the other. That way we can analyze trips to ABR and home separately, and we’ll reduce the variation within each analysis. If we were to analyze all fat biking trips together, we’d be lumping short and long trips, as well as those with a different proportion of hills or more challenging conditions. To get just trips to and from work, I’m restricting the distance to trips between 4.0 and 4.3 miles, and only those activities where there were two of them in a single day (to work and home from work). To categorize them into commutes to work and home, I filter based on the time of day.
I’m also calculating energy per mile, and adding a “winter day of year” variable (wdoy), which is a measure of how far into the winter season the trip took place. We can’t just use day of year because that starts over on January 1st, so we subtract the number of days between January and May from the date and get day of year from that. Finally, we split the data into trips to work and home.
I’m also excluding the very early season data from 2015 because the trail was in such poor condition.
fat_bike_commute <- fat_bike %>%
   filter(miles>4, miles<4.3) %>%
   mutate(direction=ifelse(hour(start_time)<10, 'north', 'south'),
          date=as.Date(start_time, tz='US/Alaska'),
          wdoy=yday(date-days(120)),
          kcal_per_mile=kcal/miles) %>%
   group_by(date) %>%
   mutate(n=n()) %>%
   ungroup() %>%
   filter(n>1)
to_abr <- fat_bike_commute %>% filter(direction=='north',
                                      wdoy>210)
to_home <- fat_bike_commute %>% filter(direction=='south',
                                       wdoy>210)
kable(head(to_home %>% select(-date, -kcal, -n)))
start_time | miles | time | hours | mph | hr | min_temp | max_temp | direction | wdoy | kcal_per_mile |
---|---|---|---|---|---|---|---|---|---|---|
2013-11-27 15:27:01 | 4.11 | 0:35:49 | 0.60 | 6.89 | 156.0 | 1.1 | 2.2 | south | 211 | 124.96350 |
2013-12-03 15:41:48 | 4.22 | 0:30:56 | 0.52 | 8.19 | 154.6 | 6.0 | 7.9 | south | 217 | 102.96209 |
2013-12-05 15:24:10 | 4.18 | 0:29:07 | 0.49 | 8.60 | 150.7 | 9.7 | 12.0 | south | 219 | 94.13876 |
2013-12-06 16:38:31 | 4.17 | 0:26:04 | 0.43 | 9.60 | 154.3 | 23.3 | 24.7 | south | 220 | 88.63309 |
2013-12-09 13:44:59 | 4.11 | 0:32:06 | 0.54 | 7.69 | 161.3 | 27.5 | 28.5 | south | 223 | 119.19708 |
2013-12-11 16:00:11 | 4.19 | 0:33:48 | 0.56 | 7.44 | 157.6 | 2.1 | 4.5 | south | 225 | 118.16229 |
Analysis
Here’s a plot of the data. We’re plotting all trips with winter day of year on the x-axis and energy per mile on the y-axis. The color of the points indicates the minimum temperature, and the straight line shows the trend of the relationship.
s <- ggplot(data=fat_bike_commute %>% filter(wdoy>210),
            aes(x=wdoy, y=kcal_per_mile, colour=min_temp)) +
   geom_smooth(method="lm", se=FALSE, colour=mnsl("10B 7/10", fix=TRUE)) +
   geom_point(size=3) +
   scale_x_continuous(name=NULL,
                      breaks=c(215, 246, 277, 305, 336),
                      labels=c('1-Dec', '1-Jan', '1-Feb', '1-Mar', '1-Apr')) +
   scale_y_continuous(name="Energy (kcal / mile)", breaks=pretty_breaks(n=10)) +
   scale_colour_continuous(low=mnsl("7.5B 5/12", fix=TRUE),
                           high=mnsl("7.5R 5/12", fix=TRUE),
                           breaks=pretty_breaks(n=5),
                           guide=guide_colourbar(title="Min temp (°F)",
                                                 reverse=FALSE, barheight=8)) +
   ggtitle("All fat bike trips") +
   theme_bw()
print(s)
Across all trips, we can see that as the winter progresses, I consume less energy per mile. This is hopefully because my physical condition improves the more I ride, and also because the trail conditions also improve as the snow pack develops and the trail gets harder with use. You can also see a pattern in the color of the dots, with the bluer (and colder) points near the top and the warmer temperature trips near the bottom.
Let’s look at the temperature relationship:
s <- ggplot(data=fat_bike_commute %>% filter(wdoy>210),
            aes(x=min_temp, y=kcal_per_mile, colour=wdoy)) +
   geom_smooth(method="lm", se=FALSE, colour=mnsl("10B 7/10", fix=TRUE)) +
   geom_point(size=3) +
   scale_x_continuous(name="Minimum temperature (degrees F)",
                      breaks=pretty_breaks(n=10)) +
   scale_y_continuous(name="Energy (kcal / mile)", breaks=pretty_breaks(n=10)) +
   scale_colour_continuous(low=mnsl("7.5PB 2/12", fix=TRUE),
                           high=mnsl("7.5PB 8/12", fix=TRUE),
                           breaks=c(215, 246, 277, 305, 336),
                           labels=c('1-Dec', '1-Jan', '1-Feb', '1-Mar', '1-Apr'),
                           guide=guide_colourbar(title=NULL, reverse=TRUE,
                                                 barheight=8)) +
   ggtitle("All fat bike trips") +
   theme_bw()
print(s)
A similar pattern. As the temperature drops, it takes more energy to go the same distance. And the color of the points also shows the relationship from the earlier plot where trips taken later in the season require less energy.
There is also a correlation between winter day of year and temperature: since the winter fat biking season essentially begins in December, it tends to warm up as the season progresses.
Results
The relationship between winter day of year and temperature means that we’ve got multicollinearity in any model that includes both of them. This doesn’t mean we shouldn’t include them, nor that the significance or predictive power of the model is reduced. It just means that we can’t reliably interpret the individual regression coefficients as independent effects.
Here are the linear models for trips to work, and home:
to_abr_lm <- lm(data=to_abr, kcal_per_mile ~ min_temp + wdoy)
print(summary(to_abr_lm))
## 
## Call:
## lm(formula = kcal_per_mile ~ min_temp + wdoy, data = to_abr)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -27.845  -6.964  -3.186   3.609  53.697 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 170.81359   15.54834  10.986 1.07e-14 ***
## min_temp     -0.45694    0.18368  -2.488   0.0164 *  
## wdoy         -0.29974    0.05913  -5.069 6.36e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 15.9 on 48 degrees of freedom
## Multiple R-squared:  0.4069, Adjusted R-squared:  0.3822 
## F-statistic: 16.46 on 2 and 48 DF,  p-value: 3.595e-06
to_home_lm <- lm(data=to_home, kcal_per_mile ~ min_temp + wdoy)
print(summary(to_home_lm))
## 
## Call:
## lm(formula = kcal_per_mile ~ min_temp + wdoy, data = to_home)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -21.615 -10.200  -1.068   3.741  39.005 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 144.16615   18.55826   7.768 4.94e-10 ***
## min_temp     -0.47659    0.16466  -2.894  0.00570 ** 
## wdoy         -0.20581    0.07502  -2.743  0.00852 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 13.49 on 48 degrees of freedom
## Multiple R-squared:  0.5637, Adjusted R-squared:  0.5455 
## F-statistic: 31.01 on 2 and 48 DF,  p-value: 2.261e-09
The models confirm what we saw in the plots. Both regression coefficients are negative, which means that as the temperature rises (and as the winter goes on) I consume less energy per mile. The models themselves are significant as are the coefficients, although less so in the trips to work. The amount of variation in kcal/mile explained by minimum temperature and winter day of year is 41% for trips to work and 56% for trips home.
What accounts for the rest of the variation? My guess is that trail conditions are the missing factor here; specifically fresh snow, or a trail churned up by snowmachiners. I think that’s also why the results are better on trips home than to work. On days when we get snow overnight, I am almost certainly riding on a pristine snow-covered trail, but by the time I leave work, the trail will be smoother and harder due to all the traffic it’s seen over the course of the day.
Conclusions
We didn’t really find anything surprising here: it is significantly harder to ride a fat bike when it’s colder. Because of conditioning, improved trail conditions, as well as the tendency for warmer weather later in the season, it also gets easier to ride as the winter goes on.

1991 contacts
Yesterday I was going through my journal books from the early 90s to see if I could get a sense of how much bicycling I did when I lived in Davis, California. I came across the list of my network contacts from January 1991 shown in the photo. I had an email, BITNET, and UUCP address on the UC Davis computer system. I don’t have any record of actually using these, but I do remember the old email clients that required lines to be less than 80 characters, but which were unable to edit lines already entered.
I found statistics for 109 of my bike rides between April 1991 and June 1992, and I think that probably represents most of them from that period. However, I moved to Davis in the fall of 1990 and left in August 1993, so I’m a little surprised I didn’t find any rides from those first six months or my last year in California.
I rode 2,671 miles in those fifteen months, topping out at 418 miles in June 1991. There were long gaps in the record where I didn’t ride at all, but when I rode, my average weekly mileage was 58 miles and maxed out at 186 miles.
To put that in perspective, in the last seven years of commuting to work and riding recreationally, my highest monthly mileage was 268 miles (last month!), my average weekly mileage was 38 miles, and the farthest I’ve gone in a week was 81 miles.
The road biking season is getting near to the end here in Fairbanks as the chances of significant snowfall on the roads rises dramatically, but I hope that next season I can push my legs (and hip) harder and approach some of the mileage totals I reached more than twenty years ago.