MAJOR UPDATE
As of Fall 2021, this wiki has been discontinued and is no longer being actively developed.
All updated materials and announcements for the QCBS R Workshop Series are now housed on the QCBS R Workshop website. Please update your bookmarks accordingly to avoid outdated material and/or broken links.
Thank you for your understanding,
Your QCBS R Workshop Coordinators.
QCBS R Workshops
This series of 10 workshops walks participants through the steps required to use R for a wide array of statistical analyses relevant to research in biology and ecology. These open-access workshops were created by members of the QCBS both for members of the QCBS and the larger community.
The content of this workshop has been peer-reviewed by several QCBS members. If you would like to suggest modifications, please contact the current series coordinators, listed on the main wiki page
Workshop 4: Linear models
Developed by: Catherine Baltazar, Bérenger Bourgeois, Zofia Taranu, Shaun Turney, Willian Vieira
Summary: In this workshop, you will learn how to implement basic linear models commonly used in ecology in R such as simple regression, analysis of variance (ANOVA), analysis of covariance (ANCOVA), and multiple regression. After verifying visually and statistically the assumptions of these models and transforming your data when necessary, the interpretation of model outputs and the plotting of your final model will no longer keep secrets from you!
Link to new Rmarkdown presentation
Link to old Prezi presentation
Download the R script and data for this lesson:
Learning Objectives
- Simple linear regression
- T-test
- ANOVA
- Two-way ANOVA
- Unbalanced ANOVA (advanced section/ optional)
- ANCOVA
- Multiple linear regression
1. Overview
1.1 Defining mean and variation
Scientists have always been interested in determining relationships between variables. In this workshop we will learn how to use linear models, a set of models that quantify relationships between variables.
To begin, we will define some important concepts that are central to linear models: mean and variation. The mean is a measure of the average value of a population. Suppose we have a random variable x, for example the height of the people in this room, and we would like to describe some of its patterns. The first summary we will use is the mean (keeping in mind that there are several ways of measuring central tendency):

$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$

But the mean alone will not fully represent a population. We can also describe the population using measures of variation. Variation is the spread around the mean: for example, whether all people in the room are approximately the same height (low variation) or whether there are many tall and short people (high variation). Mean deviation, variance, standard deviation and the coefficient of variation are all measures of variation, which we define below. We can measure the deviation of each element from the mean:

$x_i - \bar{x}$

With the deviation of each value, we can calculate the mean deviation:

$\frac{1}{n}\sum_{i=1}^{n} |x_i - \bar{x}|$

To convert all values to positive numbers without using absolute values, we can square each deviation instead. That is where the variance comes from:

$s^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2$

However, by squaring each value, our measure is no longer in meaningful units. Back to our example with the height of people in this room, the variance would be in $m^2$, which is not what we are measuring. To return to meaningful units, we take the square root of the variance, the standard deviation:

$s = \sqrt{s^2}$

Finally, the coefficient of variation, also known as the relative standard deviation, expresses the standard deviation as a percentage of the mean:

$CV = \frac{s}{\bar{x}} \times 100\%$
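As a quick illustration (a minimal sketch, not part of the original workshop script, using made-up heights in metres), these summary statistics can all be computed with base R functions:

# Sketch: mean, variance, standard deviation and coefficient of variation in R
x <- c(1.54, 1.62, 1.70, 1.75, 1.83, 1.91)  # hypothetical heights (m)

mean(x)                 # mean
var(x)                  # (sample) variance, in m^2
sd(x)                   # standard deviation, back in metres
100 * sd(x) / mean(x)   # coefficient of variation, in %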
1.2 Linear models
In linear models, we use the concepts of mean and variation to describe the relationship between two variables. Linear models are so named because they describe the relationship between variables as lines:

$y_i = \beta_0 + \beta_1 x_{1i} + ... + \beta_p x_{pi} + \varepsilon_i$

where
$y_i$ is the response variable,
$\beta_0$ is the intercept of the regression line,
$\beta_1$ is the coefficient for the first explanatory variable,
$\beta_p$ is the coefficient for the pth explanatory variable,
$x_{1i}$ is the first explanatory variable,
$x_{pi}$ is the pth explanatory variable,
$\varepsilon_i$ are the residuals of the model
The response variable is the variable you want to explain. It's also known as the dependent variable. There is only one response variable. The explanatory variables are the variables you think may explain your response variable. They're also known as independent variables. There can be one or many explanatory variables. For example, suppose we want to explain variation in the height of people in a room. Height is the response variable. Some possible explanatory variables could be gender or age.
In linear models the response variable must be continuous, while the explanatory variables can be continuous or categorical. A continuous variable has an infinite number of possible values. A categorical variable has a limited number of possible values. Age, temperature, and latitude are all continuous variables. Sex, developmental stage, and country are all categorical variables. For continuous explanatory variables, the linear model tests whether there is a significant correlation between the explanatory and response variable. For categorical explanatory variables, the linear model tests whether there is a significant difference between the different levels (groups) in their mean value of the response variable. This should become clearer as we learn about specific types of linear models in the sections below.
In almost all cases, the explanatory variables will not explain all of the variation in the response variable. Gender and age, for example, will not be enough to predict everyone's height perfectly. The remaining, unexplained variation is called error or residuals.
The goal of the linear model is to find the best estimation of the parameters (the β variables) and then assess the goodness of fit of the model. Several methods have been developed to calculate the intercept and coefficients of linear models, and the appropriate choice depends on the model. The general concept behind these methods is that the residuals are minimized.
Depending on the kind of explanatory variables considered and their number, different statistical tools can be used to assess these relationships. The table below lists the five types of statistical analysis that will be covered in this workshop:
Statistical analysis | Type of response variable Y | Type of explanatory variable X | Number of explanatory variables | Number of levels k |
---|---|---|---|---|
Simple linear regression | Continuous | Continuous | 1 | |
t-test | Continuous | Categorical | 1 | 2 |
ANOVA | Continuous | Categorical | 1 (one-way ANOVA), 2 (two-way ANOVA) or more | 3 or more |
ANCOVA | Continuous | Continuous AND categorical | 2 or more | 2 or more |
Multiple regression | Continuous | Continuous | 2 or more | |
1.3 Linear model assumptions
To be valid, a linear model must meet four assumptions; otherwise, the model results cannot be safely interpreted.
- The residuals are independent
- The residuals are normally distributed
- The residuals have a mean of 0
- The residuals are homoskedastic (they have constant variance)
Note that all of these assumptions concern the residuals, not the response or explanatory variables. The residuals must be independent, meaning that there isn't an underlying structure missing from the model (usually spatial or temporal autocorrelation). The residuals must be normally distributed with a mean of 0, meaning that the largest proportion of residuals have a value close to 0 (i.e., the error is small) and the distribution is symmetrical (i.e., the response variable is overestimated and underestimated equally often). The residuals must be homoskedastic, meaning that the error doesn't change much as the values of the predictor variables change.
In the following sections, we do not always explicitly restate the above assumptions for every model. Be aware, however, that these assumptions are implicit in all linear models, including all models presented below.
1.4 Test statistics and p-values
Once you've run your model in R, you will receive a model output that includes many numbers. It takes practice to understand what each of these numbers means and which to pay the most attention to. The model output includes the estimation of the parameters (the β variables). The output also includes test statistics. The particular test statistic depends on the linear model you are using (t is the test statistic for the linear regression and the t test, and F is the test statistic for ANOVA).
In linear models, the null hypothesis is typically that there is no relationship between two continuous variables, or that there is no difference between the levels of a categorical variable. The larger the absolute value of the test statistic, the less likely it is that you would observe data like yours if the null hypothesis were true. The exact probability is given in the model output and is called the p-value. You could think of the p-value as the probability that the null hypothesis is true, although that's a bit of a simplification. (Technically, the p-value is the probability that, given the assumption that the null hypothesis is true, the test statistic would be the same as or of greater magnitude than the actual observed test statistic.) By convention, we consider that if the p-value is less than 0.05 (5%), then we reject the null hypothesis. This cut-off value is called α (alpha). If we reject the null hypothesis then we say that the alternative hypothesis is supported: there is a significant relationship or a significant difference. Note that we do not “prove” hypotheses, only support or reject them.
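To make this concrete, here is a small sketch (not part of the original workshop script) showing how a two-sided p-value can be recovered from a test statistic and its degrees of freedom; the example values are taken from the regression output shown later in section 2.5:

# Two-sided p-value from a t statistic and its degrees of freedom
t.stat <- -2.019              # slope t value from summary(lm2) in section 2.5
df     <- 52                  # residual degrees of freedom of that model
2 * pt(-abs(t.stat), df)      # ~0.0487, i.e. just below alpha = 0.05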
1.5 Work flow
Below we will explore several kinds of linear models. The way you create and interpret each model will differ in the specifics, but the principles behind them and the general work flow will remain the same. For each model we will work through the following steps:
- Visualize the data (data visualization could also come later in your work flow)
- Create a model
- Test the model assumptions
- Adjust the model if assumptions are violated
- Interpret the model results
2. Simple linear regression
Simple linear regression is a type of linear model which contains a single, continuous explanatory variable. The regression tests whether there is a significant correlation between the two variables.
Simple linear regression involves two parameters which must be estimated: an intercept ($\beta_0$) and a slope ($\beta_1$). Ordinary least squares is the most widely used estimation method, and also corresponds to the default method of the lm
function in R. Ordinary least squares fits a line such that the sum of the squared vertical distances between the observed data and the linear regression model (i.e. the residuals) is minimized.
Click below to see the math in more detail.
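As an illustration of the ordinary least squares idea, the sketch below (not part of the original workshop script, and using simulated data) computes the slope and intercept "by hand" from the covariance and variance, then compares them to the estimates returned by lm():

# Sketch: OLS estimates by hand on simulated data, compared to lm()
set.seed(1)
x <- runif(30, 0, 10)            # hypothetical explanatory variable
y <- 2 + 0.5 * x + rnorm(30)     # hypothetical response with noise

b1 <- cov(x, y) / var(x)         # OLS slope = covariance(x, y) / variance(x)
b0 <- mean(y) - b1 * mean(x)     # OLS intercept
c(b0, b1)

coef(lm(y ~ x))                  # the same estimates from lm()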
2.1 Running a linear model
Using the bird dataset, we will first examine the linear regression of maximum abundance as a function of mass.
In R, linear regression is implemented using the lm function from the stats package:
lm (y ~ x)
Note: before using a new function in R, users should refer to its help documentation (?functionname) to find out how to use the function as well as its default settings.
- | Load and explore your data
# Loading libraries and bird dataset
library(e1071)
library(MASS)
setwd("~/Desktop/...") # Don't forget to set your working directory (note: your directory will be different)
bird <- read.csv("birdsdiet.csv")

# Visualize the dataframe
names(bird)
str(bird)
head(bird)
summary(bird)
plot(bird)
The bird dataset contains 7 variables:
Variable Name | Description | Type |
---|---|---|
Family | Common name of family | String |
MaxAbund | The highest observed abundance at any site in North America | Continuous/ numeric |
AvgAbund | The average abundance across all sites where found in NA | Continuous/ numeric |
Mass | The body size in grams | Continuous/ numeric |
Diet | Type of food consumed | Discrete – 5 levels (Plant; PlantInsect; Insect; InsectVert; Vertebrate) |
Passerine | Is it a songbird/ perching bird | Boolean (0/1) |
Aquatic | Is it a bird that primarily lives in/ on/ next to the water | Boolean (0/1) |
Note that Family, Diet, Passerine, and Aquatic are all categorical variables although they are encoded in different ways (string, discrete, boolean).
We are now ready to run our linear model:
- | Regression of Maximum Abundance on Mass
lm1 <- lm(bird$MaxAbund ~ bird$Mass) # where Y ~ X means Y "as a function of" X
2.2 Verifying assumptions
- | Diagnostic plots
opar <- par(mfrow=c(2,2)) # draws subsequent figures in a 2-by-2 panel
plot(lm1)
par(opar) # resets to 1-by-1 plot
Verifying independence
Linear models can only be applied to independent data. This means that the yi at a given xi value must not be influenced by other xi values. Violation of independence can happen if your data represent some form of dependence structure, such as spatial or temporal correlation.
There is no simple diagnostic plot for independence, unfortunately. Instead, you must consider your data carefully. Is there some underlying structure in your data that makes your data points dependent on each other? If you collect data from the same sites over time (ie, a time series) or if you collect multiple data points from the same organism, your data violates the assumption of independence. You will need to use a different type of model instead.
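There is no formal diagnostic plot produced by plot(lm()), but a couple of informal checks are possible. The sketch below is not part of the original workshop script and assumes the row order of your data is meaningful (e.g. a collection sequence):

# Informal checks of residual independence for lm1
plot(resid(lm1), type="b")   # look for runs or trends in the residuals over the data order
abline(h=0, lty=3)
acf(resid(lm1))              # autocorrelation of the residuals at increasing lags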
Verifying residual variance is constant and residual mean is 0
Residual vs Fitted plot - The first graph of the diagnostic plots is called by plot(lm1)
. This plot illustrates the spread of the residuals between each fitted values. Each point represents the distance of the response variable from the model prediction of the response variable. If the residuals spread randomly around the 0 line, this indicates that the relationship is linear and that the mean of the residuals is 0. If the residuals form an approximate horizontal band around the 0 line, this indicates homogeneity of error variance (ie, it is homoskedastic). If the residuals form a funnel shape, this indicates the residuals are not homoskedastic.
Scale-location plot - The third graph of the diagnostic plots enables one to verify whether the residual spread increases with a given fitted values (i.e. identifies whether the spread in the residuals is due to the selected explanatory variable). If the spread increases, the homoscedasticity assumption is not respected.
Verifying that residuals are normally distributed
QQ plot - Normality can be assessed from the QQplot of the diagnostic plots. This graph compares the probability distribution of the model residuals to the probability distribution of normal data series. If the standardized residuals lie linearly on the 1:1 line of the QQplot, the residuals can be considered normally distributed.
The points of the QQplot are nonlinear, which suggests that the residuals are not normally distributed.
Checking for high leverage
In addition to the assumption testing above, we are also interested in whether any of our data points have high leverage. This is not assumption testing per se, but it will affect our interpretation of the data. If some of the observations in a dataset possess strongly different values from others, a model fitting problem can arise such that these high leverage data influence the model calculation.
Residuals vs Leverage plot - High leverage data can be visualised on the fourth diagnostic plots (i.e. residuals vs leverage), which identifies the observation numbers of the high leverage data point(s). If (and only if!) these observations correspond to mis-measurements or represent exceptions, they can be removed from the original dataset.
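If you want to go beyond the diagnostic plot, leverage and influence can also be extracted numerically. This is a small sketch, not part of the original workshop script:

# Numerical look at leverage and influence for lm1
h <- hatvalues(lm1)                    # leverage of each observation
which(h > 2 * mean(h))                 # a common rule of thumb for flagging high leverage
cooks.distance(lm1)[which.max(cooks.distance(lm1))]  # the single most influential observation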
2.3 Normalizing data
In the example provided above, the model residuals were not normally distributed and therefore the assumption of residual normality is violated. We may still be able to use a linear regression model if we can address this violation. The next step is to try to normalize the variables using transformations. Often if we can make the explanatory and/or response variables normally distributed then the model residuals will become normally distributed. In addition to QQ-plots we can assess the normality of a variable by drawing a histogram using the function hist
, and check visually whether the data series appears to follow a normal distribution. For example:
- | Testing Normality: hist() function
# Plot Y ~ X and the regression line
plot(bird$MaxAbund ~ bird$Mass, pch=19, col="coral", ylab="Maximum Abundance", xlab="Mass")
abline(lm1, lwd=2)
?plot # For further details on plot() arguments
# see colours() for list of colours

# Is the data normally distributed?
hist(bird$MaxAbund, col="coral", main="Untransformed data", xlab="Maximum Abundance")
hist(bird$Mass, col="coral", main="Untransformed data", xlab="Mass")
A third way to assess normality is to use the Shapiro-Wilk normality test that compares the distribution of the observed data series to a normal distribution using the function shapiro.test
.
The null and alternate hypotheses of this test are:
H0: the observed data series is normally distributed,
H1: the observed data series is not normally distributed,
The observed data series can be considered normally distributed when the p-value calculated by the Shapiro-Wilk normality test is greater than or equal to α, typically set to 0.05.
- Testing Normality: shapiro.test() function
# Test null hypothesis that the sample came from a normally distributed population
shapiro.test(bird$MaxAbund)
shapiro.test(bird$Mass)
# If p < 0.05, then the distribution is not normal
# If p > 0.05, then the distribution is normal
We can also evaluate the skewness of each distribution using the skewness function (from the e1071 package):
- Testing Normality: skewness() function
skewness(bird$MaxAbund)
skewness(bird$Mass)
# Positive skewness values indicate a right-skewed distribution (long right tail),
# and negative values indicate a left-skewed distribution
The histograms, Shapiro-Wilk tests and skewness values all indicate that the variables need to be transformed to approach normality (e.g. with a log10 transformation).
2.4 Data transformation
In case of non-normality, response and explanatory variables can be transformed to enhance their normality following these rules:
Type of distribution | Transformation | R function |
---|---|---|
Moderately positive skewness | √x | sqrt(x) |
Substantially positive skewness | log10(x) | log10(x) |
Substantially positive skewness | log10(x + C), where C is a constant added to each value of x so that the smallest score is 1 | log10(x + C) |
Moderately negative skewness | √(K - x), where K is a constant from which each value of x is subtracted so that the smallest score is 1 | sqrt(K - x) |
Substantially negative skewness | log10(K - x) | log10(K - x) |
Thus, log10 transformations should be applied and saved in the bird data frame. The model can then be re-run, verified and interpreted.
- | Data Transformation
# Add log10() transformed variables to your dataframe
bird$logMaxAbund <- log10(bird$MaxAbund)
bird$logMass <- log10(bird$Mass)
names(bird) # to view the dataframe + new transformed variables

hist(bird$logMaxAbund, col="yellowgreen", main="Log transformed",
     xlab=expression("log"[10]*"(Maximum Abundance)"))
hist(bird$logMass, col="yellowgreen", main="Log transformed",
     xlab=expression("log"[10]*"(Mass)"))
shapiro.test(bird$logMaxAbund); skewness(bird$logMaxAbund)
shapiro.test(bird$logMass); skewness(bird$logMass)

# Re-run your analysis with the appropriate transformations
lm2 <- lm(bird$logMaxAbund ~ bird$logMass)

# Are there remaining problems with the diagnostics (heteroscedasticity, non-independence, high leverage)?
opar <- par(mfrow=c(2,2))
plot(lm2, pch=19, col="gray")
par(opar)
2.5 Model output
Once all these assumptions have been verified, the model results can be interpreted. These results are called in R using the function summary
.
- | Summary output
# Now we can look at the model coefficients and p-values
summary(lm2)

# You can also just call up the coefficients of the model
lm2$coef

# What else?
str(summary(lm2))
summary(lm2)$coefficients  # where Std. Error is the standard error of each estimate
summary(lm2)$r.squared     # Coefficient of determination
summary(lm2)$adj.r.squared # Adjusted coefficient of determination
summary(lm2)$sigma         # Residual standard error (square root of Error Mean Square)
# etc.

# You can also check for yourself the equation for R2:
SSE <- sum(resid(lm2)^2)
SST <- sum((bird$logMaxAbund - mean(bird$logMaxAbund))^2)
R2 <- 1 - SSE/SST
R2
The output of this function presents all the results of your validated model:
Call:
lm(formula = logMaxAbund ~ logMass, data = bird)

Residuals:
     Min       1Q   Median       3Q      Max
-1.93562 -0.39982  0.05487  0.40625  1.61469

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.6724     0.2472   6.767 1.17e-08 ***
logMass      -0.2361     0.1170  -2.019   0.0487 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.6959 on 52 degrees of freedom
Multiple R-squared:  0.07267,	Adjusted R-squared:  0.05484
F-statistic: 4.075 on 1 and 52 DF,  p-value: 0.04869
The coefficients of the regression model and their associated standard error appear in the second and third columns of the regression table, respectively. Thus,
β0 = 1.6724 ± 0.2472 is the intercept (± se) of the regression model,
β1 = -0.2361 ± 0.1170 is the slope (± se) of the regression model.
and finally: logMaxAbund = 1.6724 (± 0.2472) - 0.2361 (± 0.1170) x logMass
The t-values and their associated p-values (in the fourth and fifth columns of the regression table, respectively) test whether the calculated coefficients differ significantly from zero. In this case, we can see that logMass has a significant influence on logMaxAbund because the p-value associated with the slope of the regression model is less than 0.05. Moreover, these two variables are negatively related, as the slope of the regression model is negative.
Users must, however, be aware that a significant relationship between two variables does not always imply causality. Conversely, the absence of a significant linear regression between y and x does not always imply an absence of a relationship between these two variables; this is, for example, the case when the relationship is not linear.
The goodness of fit of the linear regression model is assessed from the adjusted-R2 (here, 0.05484). This value is a measure of the proportion of variation explained by the model.
Click below to see the math in more detail.
The higher the adjusted-R2 is, the better the data fit the statistical model, knowing that this coefficient varies between 0 and 1. In this case, the relationship between logMaxAbund and logMass is quite weak.
The last line of the R output gives the F-statistic of the model and its associated p-value. If this p-value is less than 0.05, the model explains the relationship in the data better than a null model.
2.6 Plotting
Linear regression results are generally represented by a plot of the response variable as a function of the explanatory variable on which the regression line is added (and if needed the confidence intervals), using the R code:
- | Plot Y ~ X with regression line and CI
plot(logMaxAbund ~ logMass, data=bird, pch=19, col="yellowgreen",
     ylab = expression("log"[10]*"(Maximum Abundance)"),
     xlab = expression("log"[10]*"(Mass)"))
abline(lm2, lwd=2)

# You may also flag the previously identified high-leverage points
points(bird$logMass[32], bird$logMaxAbund[32], pch=19, col="violet")
points(bird$logMass[21], bird$logMaxAbund[21], pch=19, col="violet")
points(bird$logMass[50], bird$logMaxAbund[50], pch=19, col="violet")

# We can also plot the confidence intervals
confit <- predict(lm2, interval="confidence")
points(bird$logMass, confit[,2])
points(bird$logMass, confit[,3])
2.7 Subsetting
We may also run the analysis on a subset of observations, for example, on terrestrial birds only.
- | Regression on Subset of Observations
# Recall that you can exclude objects using "!"
# We can analyze a subset of the data using the subset argument in lm()
lm3 <- lm(logMaxAbund ~ logMass, data=bird, subset = !bird$Aquatic) # removing the aquatic birds
# or equivalently
lm3 <- lm(logMaxAbund ~ logMass, data=bird, subset = bird$Aquatic == 0)

# Examine the model
opar <- par(mfrow=c(2,2))
plot(lm3, pch=19, col=rgb(33,33,33,100,maxColorValue=255))
summary(lm3)
par(opar)

# Compare the two datasets
opar <- par(mfrow=c(1,2))
plot(logMaxAbund ~ logMass, data=bird, main="All birds",
     ylab = expression("log"[10]*"(Maximum Abundance)"),
     xlab = expression("log"[10]*"(Mass)"))
abline(lm2, lwd=2)
plot(logMaxAbund ~ logMass, data=bird, subset=!bird$Aquatic, main="Terrestrial birds",
     ylab = expression("log"[10]*"(Maximum Abundance)"),
     xlab = expression("log"[10]*"(Mass)"))
abline(lm3, lwd=2)
par(opar)
CHALLENGE 1
Examine the relationship between log10(MaxAbund) and log10(Mass) for passerine birds.
HINT:
Passerine is coded 0 and 1 just like Aquatic. You can verify this by viewing the structure str(bird)
.
3. ANOVA
Analysis of Variance (ANOVA) is a type of linear model for a continuous response variable and one or more categorical explanatory variables. The categorical explanatory variables can have any number of levels (groups). For example, the variable “colour” might have three levels: green, blue, and yellow. ANOVA tests whether the means of the response variable differ between the levels; for example, whether blueberries differ in their mass depending on their colour.
ANOVA calculations are based on the partitioning of sums of squares and compare the within-treatment variance to the between-treatment variance. If the between-treatment variance is greater than the within-treatment variance, this means that the treatments affect the response variable more than the random error (corresponding to the within-treatment variance), and that the response variable is likely to be significantly influenced by the explanatory variable.
In the ANOVA, the comparison of the between-treatment variance to the within-treatment variance is made through the calculation of the F-statistic, which corresponds to the ratio of the mean sum of squares of the treatment (MSTrt) to the mean sum of squares of the error (MSE). These two terms are obtained by dividing their respective sums of squares by their corresponding degrees of freedom, as is typically presented in an ANOVA table (click to see below). Finally, the p-value of the ANOVA is calculated from the F-statistic, which follows an F distribution.
Click to see the math in more detail below.
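To connect these formulas to R output, the sketch below (not part of the original workshop script) rebuilds the F-statistic and its p-value from the sums of squares of a fitted one-way ANOVA; it assumes the model anov1 from section 3.3 has already been fitted:

# Sketch: rebuilding the F statistic and p-value from an ANOVA table
# (assumes anov1 <- lm(logMaxAbund ~ Diet, data=bird) has been fitted; see section 3.3)
tab   <- anova(anov1)
MStrt <- tab$"Mean Sq"[1]   # mean sum of squares of the treatment
MSE   <- tab$"Mean Sq"[2]   # mean sum of squares of the error
Fstat <- MStrt / MSE        # F statistic
pf(Fstat, tab$Df[1], tab$Df[2], lower.tail = FALSE)  # p-value from the F distribution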
3.1 Types of ANOVA
- One-way ANOVA
One categorical explanatory variable with 2 or more levels. If there are only 2 levels, a t-test can be used instead.
- Two-way ANOVA (see section below)
  - 2 or more categorical explanatory variables,
  - Each categorical explanatory variable can have multiple levels,
  - The interactions between the categorical explanatory variables must be tested.
- Repeated measures
ANOVA can be used for repeated measures, but we won't cover this today. Linear mixed-effect models can also be used for this kind of data (see Workshop 6).
3.2 T-test
When you have a single categorical explanatory variable with only two levels, you can run a Student's t-test to test for a difference in the mean of the two levels. If appropriate for your data, you can choose to test a unilateral (one-sided) hypothesis. This means that you can test the more specific assumption that one level has a higher mean than the other, rather than simply that their means differ.
Click to see the math in more detail below.
Note that the t-test is mathematically equivalent to a one-way ANOVA with 2 levels.
Assumptions
If the assumptions of the t-test are not met, the test can give misleading results. Here are some important things to note when testing the assumptions of a t-test.
- Normality of data
As with simple linear regression, the residuals need to be normally distributed. If the data are not normally distributed, but have reasonably symmetrical distributions, a mean close to the centre of the distribution, and only one mode (highest point in the frequency histogram), then a t-test will still work as long as the sample is sufficiently large (rule of thumb: ~30 observations). If the data are heavily skewed, then we may need a very large sample before a t-test works; in such cases, an alternate non-parametric test should be used.
- Homoscedasticity
Another important assumption of the two-sample t-test is that the variances of your two samples are equal. This allows you to calculate a pooled variance, which in turn is used to calculate the standard error. If population variances are unequal, then the probability of a Type I error is greater than α.
The robustness of the t-test increases with sample size and is higher when groups have equal sizes.
We can test for a difference in variances between two populations by asking: what is the probability of drawing two samples from two populations having identical variances and obtaining sample variances as different as $s_1^2$ and $s_2^2$?
To do so, we must perform a variance ratio test (i.e. an F-test).
Violation of assumptions
If variances between groups are not equal, it is possible to use corrections, like the Welch correction. If the assumptions cannot be respected, the non-parametric equivalent of the t-test is the Mann-Whitney test. Finally, if the two groups are not independent (e.g. measurements on the same individual in 2 different years), you should use a paired t-test. These alternatives are sketched below.
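In R these alternatives look like the following sketch (not part of the original workshop script; the paired example uses hypothetical vectors year1 and year2):

# Sketches of the alternatives mentioned above
t.test(logMass ~ Aquatic, data=bird, var.equal=FALSE)  # Welch correction for unequal variances
wilcox.test(logMass ~ Aquatic, data=bird)              # Mann-Whitney (non-parametric) test
# Paired t-test: year1 and year2 would be repeated measurements on the same individuals
# t.test(year1, year2, paired=TRUE)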
Running a t-test
In R, t-tests are implemented using the function t.test
. For example, to test for a mass difference between aquatic and non-aquatic birds, you should write:
- | T-test
# T-test
boxplot(logMass ~ Aquatic, data=bird, ylab=expression("log"[10]*"(Bird Mass)"),
        names=c("Non-Aquatic","Aquatic"), col=c("yellowgreen","skyblue"))

# First, let's test the assumption of equal variance
# Note: we do not need to test the assumption of normally distributed data since
# we already log-transformed the data above
tapply(bird$logMass, bird$Aquatic, var)
var.test(logMass ~ Aquatic, data=bird)

# We are now ready to run the t-test
ttest1 <- t.test(logMass ~ Aquatic, var.equal=TRUE, data=bird)
# or equivalently
ttest1 <- t.test(x=bird$logMass[bird$Aquatic==0], y=bird$logMass[bird$Aquatic==1], var.equal=TRUE)
ttest1
	Two Sample t-test

data:  logMass by Aquatic
t = -7.7707, df = 52, p-value = 2.936e-10
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -1.6669697 -0.9827343
sample estimates:
mean of x mean of y
 1.583437  2.908289
Here, we show that the ratio of variances is not statistically different from 1, therefore variances are equal, and we proceeded with our t-test. Since p < 0.05, the hypothesis of no difference between the two bird types (Aquatic vs. terrestrial) was rejected.
Unilateral t-test
The alternative option of the t.test
function allows for unilateral (one-sided) t-tests. For example, to test whether non-aquatic birds are lighter than aquatic birds, the function can be written as:
- | Unilateral t-test
# Alternative T-test
uni.ttest1 <- t.test(logMass ~ Aquatic, var.equal=TRUE, data=bird, alternative="less")
uni.ttest1
In the R output, called by uni.ttest1
, the results of the t-test appear in the third line:
	Two Sample t-test

data:  logMass by Aquatic
t = -7.7707, df = 52, p-value = 1.468e-10
alternative hypothesis: true difference in means is less than 0
95 percent confidence interval:
      -Inf -1.039331
sample estimates:
mean in group 0 mean in group 1
       1.583437        2.908289
In this case, the calculated t-statistic is t = -7.7707 with df = 52 degrees of freedom, which gives a p-value of 1.468e-10. As the calculated p-value is less than 0.05, the null hypothesis is rejected. Thus, aquatic birds are significantly heavier than non-aquatic birds.
Running a t-test with lm()
A t-test is a linear model and a specific case of ANOVA with one factor with 2 levels. As such, we can also run the t-test with the lm()
function in R:
- | T-test as a linear model
ttest.lm1 <- lm(logMass ~ Aquatic, data=bird)
anova(ttest.lm1)
Analysis of Variance Table

Response: logMass
          Df Sum Sq Mean Sq F value    Pr(>F)
Aquatic    1 19.015 19.0150  60.385 2.936e-10 ***
Residuals 52 16.375  0.3149
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
When variances are equal (i.e., for the two-sample t-test with pooled variance), we can show that t² = F, as in the sketch below.
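A quick numerical check (not in the original script), using the objects created above:

# t^2 from the t-test equals F from the one-way ANOVA (equal-variance case)
ttest1$statistic^2             # squared t statistic (~60.4)
anova(ttest.lm1)$"F value"[1]  # F statistic from the ANOVA table (~60.4)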
3.3 Running an ANOVA
The t-test is only for a single categorical explanatory variable with 2 levels. For all other linear models with categorical explanatory variables we use ANOVA. First, let's visualize the data using boxplot()
. Recall that, by default, R will order your groups alphabetically. We can reorder the groups according to the median of each Diet level.
Another way to graphically view the effect sizes is to use plot.design()
. This function will illustrate the levels of a particular factor along a vertical line, and the overall value of the response is drawn as a horizontal line.
- | ANOVA
# Default alphabetical order
boxplot(logMaxAbund ~ Diet, data=bird)

# Relevel factors
med <- sort(tapply(bird$logMaxAbund, bird$Diet, median))
boxplot(logMaxAbund ~ factor(Diet, levels=names(med)), data=bird,
        col=c("white","lightblue1","skyblue1","skyblue3","skyblue4"))

plot.design(logMaxAbund ~ Diet, data=bird, ylab = expression("log"[10]*"(Maximum Abundance)"))
Let's now run the ANOVA. In R, ANOVA can be called either directly with the aov
function, or with the anova
function performed on a linear model previously implemented with lm
:
- ANOVA in R
# Using aov()
aov1 <- aov(logMaxAbund ~ Diet, data=bird)
summary(aov1)

# Using lm()
anov1 <- lm(logMaxAbund ~ Diet, data=bird)
anova(anov1)
3.4 Verifying assumptions
As with the simple linear regression and t-test, ANOVA must meet the four assumptions of linear models. Below are some tips on how to test these assumptions for an ANOVA.
- Normal distribution
The residuals of the ANOVA model can once again be visualised with the normal QQ plot. If the residuals lie linearly on the 1:1 line of the QQ plot, they can be considered normally distributed. If not, the ANOVA results cannot be interpreted.
- Homoscedasticity
To be valid, ANOVA must be performed on models with homogeneous residual variance. This homoscedasticity can be verified using either the residuals vs fitted plot or the scale-location plot of the diagnostic plots. If these plots show an equivalent spread of the residuals across the fitted values, then the residual variance can be considered homogeneous.
A second way to assess the homogeneity of residual variance is to perform a Bartlett test on the ANOVA model using the function bartlett.test. If the p-value of this test is greater than 0.05, the null hypothesis H0: $s_1^2 = s_2^2 = ... = s_j^2 = ... = s_n^2$ is not rejected and the homoscedasticity assumption is respected.
Usual transformations of the variables can be used if the homogeneity of residual variance is not met.
- Additivity
In addition to the assumption testing, it is important to consider whether the effects of two factors are additive. The effects are additive if the effect of one factor remains constant over all levels of the other factor, and if each factor influences the response variable independently of the other factor(s).
If assumptions are violated, you can try to transform your data, which could potentially equalize variances and normalize residuals, and can convert a multiplicative effect into an additive effect. If you can't (or don't want to) transform your data, the non-parametric equivalent of ANOVA is the Kruskal-Wallis test (see the sketch below).
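A brief sketch of that non-parametric alternative (not part of the original workshop script):

# Kruskal-Wallis test: non-parametric alternative to the one-way ANOVA
kruskal.test(logMaxAbund ~ Diet, data=bird)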
- Model diagnostics
# Plot for diagnostics
opar <- par(mfrow=c(2,2))
plot(anov1)
par(opar)

# Test assumption of normality of residuals
shapiro.test(resid(anov1))

# Test assumption of homogeneity of variance
bartlett.test(logMaxAbund ~ Diet, data=bird)
Ideally the first diagnostic plot should show similar scatter for each Diet level. The Shapiro and Bartlett tests are both non-significant, therefore residuals are assumed to be normally distributed and variances are assumed to be equal.
3.5 Model output
Once your ANOVA model has been validated, its results can be interpreted. The R output of the ANOVA model depends on the function that was used to implement the ANOVA. If the aov
function is used to implement the ANOVA model
aov1 <- aov(logMaxAbund ~ Diet, data=bird)
the results of the ANOVA can be visualized using the function
summary(aov1)
On the other hand, if lm()
is used
anov1 <- lm(logMaxAbund ~ Diet, data=bird)
the ANOVA results must be called using the function
anova(anov1)
In both cases, the R output is as follows:
            Df Sum Sq Mean Sq F value Pr(>F)
Diet         4  5.106   1.276   2.836 0.0341 *
Residuals   49 22.052   0.450
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
This R output corresponds exactly to the ANOVA table of your model. It presents the degrees of freedom, the sums of squares, the mean sums of squares and the F-value explained previously. For this example, diet significantly influences the abundance of birds, as the p-value is less than 0.05. The null hypothesis can then be rejected, meaning that at least one of the diet treatments influenced the abundance differently from the others.
3.6 Complementary test
Importantly, ANOVA cannot identify which treatment differs from the others in terms of the response variable; it can only identify that a difference is present. To determine the location of the difference(s), post-hoc tests that compare the levels of the explanatory variable (i.e. the treatments) two by two must be performed. While several post-hoc tests exist (e.g. Fisher's least significant difference, Duncan's new multiple range test, the Newman-Keuls method, Dunnett's test, etc.), Tukey's range test is used in this example via the function TukeyHSD
as follows:
- Post-hoc Tukey Test
# Where does the Diet difference lie?
TukeyHSD(aov(anov1), ordered=TRUE)
# or equivalently
TukeyHSD(aov1, ordered=TRUE)
The R output for this test gives a table containing all the pairwise comparisons of the explanatory variable levels and identifies which treatments differ from the others:
  Tukey multiple comparisons of means
    95% family-wise confidence level
    factor levels have been ordered

Fit: aov(formula = anov1)

$Diet
                            diff         lwr      upr     p adj
Vertebrate-InsectVert  0.3364295 -1.11457613 1.787435 0.9645742
Insect-InsectVert      0.6434334 -0.76550517 2.052372 0.6965047
Plant-InsectVert       0.8844338 -1.01537856 2.784246 0.6812494
PlantInsect-InsectVert 1.0657336 -0.35030287 2.481770 0.2235587
Insect-Vertebrate      0.3070039 -0.38670951 1.000717 0.7204249
Plant-Vertebrate       0.5480043 -0.90300137 1.999010 0.8211024
PlantInsect-Vertebrate 0.7293041  0.02128588 1.437322 0.0405485
Plant-Insect           0.2410004 -1.16793813 1.649939 0.9884504
PlantInsect-Insect     0.4223003 -0.19493574 1.039536 0.3117612
PlantInsect-Plant      0.1812999 -1.23473664 1.597336 0.9961844
In this case, the only significant difference in abundance occurs between the PlantInsect diet and the Vertebrate diet.
3.7 Plotting
After having verified the assumptions of your ANOVA model, interpreted the ANOVA table and differentiated the effect of the treatments using post-hoc tests or contrasts, the ANOVA results can be graphically illustrated using a barplot
. This shows the response variable as a function of the explanatory variable levels, where standard errors can be superimposed on each bar as well as the different letters representing the treatment group (according to the post-hoc test).
- Barplot
# Graphical illustration of ANOVA model using barplot()
sd <- tapply(bird$logMaxAbund, list(bird$Diet), sd)
means <- tapply(bird$logMaxAbund, list(bird$Diet), mean)
n <- length(bird$logMaxAbund)
se <- 1.96*sd/sqrt(n)

bp <- barplot(means, col=c("white","lightblue1","skyblue1","skyblue3","skyblue4"),
              ylab = expression("log"[10]*"(Maximum Abundance)"), xlab="Diet", ylim=c(0,1.8))

# Add vertical se bars
segments(bp, means - se, bp, means + se, lwd=2)
# and horizontal lines
segments(bp - 0.1, means - se, bp + 0.1, means - se, lwd=2)
segments(bp - 0.1, means + se, bp + 0.1, means + se, lwd=2)
3.8 Contrasts (advanced section/ optional)
4. Two-way ANOVA
In the above section, the ANOVA models had a single categorical variable. We can create ANOVA models with multiple categorical explanatory variables. When there are two categorical explanatory variables, we refer to the model as a two-way ANOVA. A two-way ANOVA tests several hypotheses: that there is no difference in mean among levels of variable A; that there is no difference in mean among levels of variable B; and that there is no interaction between variables A and B. A significant interaction means the mean value of the response variable for each level of variable A changes depending on the level of B. For example, perhaps the relationship between the colour of a fruit and its mass will depend on the plant species: if so, we say there is an interaction between colour and species.
Click to see the math in more detail below.
4.1 Running a two-way ANOVA
In R, a two-way ANOVA model is implemented in the same fashion as a one-way ANOVA, using the function lm(); a generic sketch is shown below.
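This generic sketch (not part of the original workshop script, and using simulated data with hypothetical factors A and B so as not to give away the challenge) shows the formula syntax for a two-way ANOVA with an interaction:

# Generic two-way ANOVA with interaction (simulated data; A and B are hypothetical factors)
set.seed(42)
dat <- data.frame(A = gl(2, 30, labels=c("a1","a2")),
                  B = gl(3, 10, 60, labels=c("b1","b2","b3")))
dat$Y <- rnorm(60, mean = 5 + as.numeric(dat$A) + as.numeric(dat$B))

m <- lm(Y ~ A * B, data=dat)   # A * B expands to A + B + A:B
anova(m)                       # tests A, B, and the A:B interaction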
CHALLENGE 2
Examine the effects of the factors Diet, Aquatic, and their interaction on the maximum bird abundance.
Recall: Before interpreting the ANOVA results, the model must first be validated by verifying the statistical assumptions of ANOVA, namely the:
- Normal distribution of the model residuals
- Homoscedasticity of the residual variance
This verification can be done using the four diagnostic plots as previously explained for one-way ANOVA.
4.2 Interaction plot
Interactions can also be viewed graphically using the function interaction.plot
as:
- Interaction Plot
interaction.plot(bird$Diet, bird$Aquatic, bird$logMaxAbund, col="black", ylab = expression("log"[10]*"(Maximum Abundance)"), xlab="Diet")
What do the gaps in the line for the Aquatic group mean?
- Unbalanced design
table(bird$Diet, bird$Aquatic)
             0  1
Insect      14  6
InsectVert   1  1
Plant        2  0
PlantInsect 17  1
Vertebrate   5  7
The design is unbalanced; unequal observations among diet levels for Aquatic (coded as 1) and Terrestrial (coded as 0). See advanced section below for details on unbalanced ANOVA designs.
CHALLENGE 3
Test the significance of the Aquatic factor by comparing nested models with and without this categorical variable.
5. Unbalanced ANOVA (advanced section/ optional)
6. ANCOVA
Analysis of covariance (ANCOVA) is a linear model that tests the influence of one categorical explanatory variable (or more) and one continuous explanatory variable (or more) on a continuous response variable. Each level of the categorical variable is described by its own slope and intercept. In addition to testing if the response variable differs for at least one level of the categorical variable, ANCOVA also tests whether the response variable might be influenced by its relationship with the continuous variable (called the covariate in ANCOVA), and by any differences between group levels in the way that the continuous variable influences the response (i.e. the interaction). The ANCOVA hypotheses are thus: that there is no difference in the mean among levels of the categorical variable; there is no correlation between the response variable and the continuous explanatory variable; there is no interaction between the categorical and continuous explanatory variables.
6.1 Assumptions
As with models seen above, to be valid ANCOVA models must meet the statistical assumptions of linear models that can be verified using diagnostic plots. In addition, ANCOVA models must have:
- The same value range for all covariates
- Variables that are fixed
- No interaction between categorical and continuous variables
Note: A fixed variable is one that you are specifically interested in (e.g. bird mass). In contrast, a random variable is noise that you want to control for (e.g. the site a bird was sampled in). If you have random variables, see the workshop on linear mixed-effect models!
6.2 Types of ANCOVA
You can have any number of factors and/or covariates, but as their number increases, the interpretation of results gets more complex.
The most frequently used ANCOVAs are those with:
- one covariate and one factor
- one covariate and two factors
- two covariates and one factor
The different possible goals of the ANCOVA are to determine the effects of:
- the categorical and continuous variables on the response variable
- the categorical variable(s) on the response variable(s) after removing the effect of the continuous variable
- the categorical variable(s) on the relationship between the continuous variables(s) and the response variable
Importantly, these goals are only met if there is no significant interaction between the categorical and continuous variables! Examples of significant interactions between the categorical and continuous variables (for an ANCOVA with one factor and one covariate) are illustrated by the second and third panels below:
The same logic follows for ANCOVAs with multiple categorical and/or continuous variables.
6.3 Running an ANCOVA
Running an ANCOVA in R is comparable to running a two-way ANOVA, using the function lm
. However, instead of using two categorical variables (Diet and Aquatic), we now use one categorical and one continuous variable.
For example, using a built-in dataset called CO2, where the response variable is uptake, the continuous variable is conc and the factor is Treatment, the ANCOVA is:
- ANCOVA example
ancova.example <- lm(uptake ~ conc*Treatment, data=CO2)
anova(ancova.example)
If only your categorical variable is significant, drop your continuous variable from the model: you will then have an ANOVA.
If only your continuous variable is significant, drop your categorical variable from the model: you will then have a simple linear regression.
If your interaction is significant, you might want to test which levels of your categorical variable have different slopes and to question whether ANCOVA is the most appropriate model.
In the CO2 example above, both the continuous and categorical variables are significant, but the interaction is non-significant. If you replace Treatment with Type, however, you will see an example of a significant interaction (see the sketch below).
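For instance, a brief sketch using the same built-in CO2 dataset:

# Same ANCOVA structure, but with Type instead of Treatment:
# here the conc:Type interaction is significant
ancova.type <- lm(uptake ~ conc*Type, data=CO2)
anova(ancova.type)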
If you want to compare means across factor levels, you can use adjusted means, which use the equations given by the ANCOVA to estimate the mean of each level of the categorical variable, corrected for the effect of the covariate:
- Adjusted means
install.packages("effects")
library(effects)
adj.means <- effect('Treatment', ancova.example)
plot(adj.means)
adj.means <- effect('conc*Treatment', ancova.example)
plot(adj.means)
CHALLENGE 4
Run an ANCOVA to test the effect of Diet, Mass, and their interaction on MaxAbund.
7. Multiple regression
Multiple regression tests the effects of several continuous explanatory variables on a response variable.
7.1 Assumptions
In addition to the usual assumptions of linear models, it is important to test for orthogonality because it will affect model interpretation. Variables are not orthogonal when explanatory variables are collinear. If one explanatory variable is correlated to another, they are likely to explain the same variability of the response variable, and the effect of one variable will be masked by the other.
If you see any pattern between two explanatory variables, they are collinear. Collinearity must be avoided as the effect of each explanatory variable will be confounded! Possible solutions are:
- Keep only one of the collinear variables,
- Try multidimensional analysis (see workshop 9),
- Try a pseudo-orthogonal analysis.
Collinearity between explanatory variables can be assessed based on the variance inflation factor using the vif
function of package ‘HH’:
- Variance Inflation Factor
library(HH) # assuming the HH package has been installed
vif(clDD ~ clFD + clTmi + clTma + clP + grass, data=Dickcissel)
which gives the following output:
clFD clTmi clTma clP grass 13.605855 9.566169 4.811837 3.196599 1.165775
As a variance inflation factor higher than 5 indicates collinear variables, the R output shows that clFD and clTmi are highly collinear with the other explanatory variables. Only one of these collinear variables can thus be retained in the final regression model.
7.2 Dickcissel dataset
The Dickcissel dataset explores environmental variables that drive the abundance and presence/ absence of a grassland bird with peak abundances in Kansas, USA. It contains 15 variables:
Variable Name | Description | Type |
---|---|---|
abund | The number of individuals observed at each route | Continuous/ numeric |
Present | Presence/ absence of the species | Boolean (“Present”/ “Absent”) |
broadleaf, conif, crop, grass, shrub, urban, wetland | Land use variables within 20 km radius of the center route | Continuous/ numeric |
NDVI | Vegetation index (a measure of productivity) | Integer |
clDD, clFD, clTma, clTmi, clP | Climate data (DD = degree days, FD = frost days, Tma = max temperature, Tmi = min temperature, P = precipitation) | Continuous/ numeric |
In R, multiple regressions are implemented using the lm
function and its results are viewed using the summary
function. Using the Dickcissel data, for example, we can test the effects of climate, productivity and land cover on the abundance of the Dickcissel species by applying the model shown further below.
CHALLENGE 5
Is a transformation needed for the response variable abund?
As you likely noticed in Challenge 5, the abund variable could not be normalized, suggesting that we might need to relax the assumptions of a normally distributed response variable and move on to Generalized Linear Models, but that will wait until later!
For now, let's simply use the untransformed abund and compare the relative importance of the three variables (climate, productivity, and land cover) on abund.
- Multiple Regression
lm.mult <- lm(abund ~ clTma + NDVI + grass, data=Dickcissel)
summary(lm.mult)
The R output enables one to visualize the significant explanatory variables:
Call:
lm(formula = abund ~ clTma + NDVI + grass, data = Dickcissel)

Residuals:
    Min      1Q  Median      3Q     Max
-35.327 -11.029  -4.337   2.150 180.725

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -83.60813   11.57745  -7.222 1.46e-12 ***
clTma         3.27299    0.40677   8.046 4.14e-15 ***
NDVI          0.13716    0.05486   2.500   0.0127 *
grass        10.41435    4.68962   2.221   0.0267 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 22.58 on 642 degrees of freedom
Multiple R-squared:  0.117,	Adjusted R-squared:  0.1128
F-statistic: 28.35 on 3 and 642 DF,  p-value: < 2.2e-16
In this case, the three explanatory variables significantly influence the abundance of the Dickcissel species, the most significant one being the climate (p-value=4.14e-15). Altogether these variables explain 11.28% of the Dickcissel abundance variability (Adjusted R-squared= 0.1128). The overall model is also significant and explains the Dickcissel abundance variability better than a null model (p-value: < 2.2e-16).
A plot of the response variable as a function of each explanatory variable can be used to represent graphically the model results:
plot(abund ~ clTma, data=Dickcissel, pch=19, col="orange")
plot(abund ~ NDVI, data=Dickcissel, pch=19, col="skyblue")
plot(abund ~ grass, data=Dickcissel, pch=19, col="green")
7.3 Polynomial regression (advanced section/ optional)
7.4 Stepwise regression
8. Variance partitioning (advanced section/ optional)
Go further!
Amazing! You are now ready to perform your own regression, ANOVA and ANCOVA! But never forget to correctly specify your model and verify its statistical assumptions before interpreting its results according to the ecological background of your data.
Some exciting books about linear regression and ANOVA:
- Myers RH - Classical and Modern Regression with Application
- Gotelli NJ - A Primer of Ecological Statistics