Fix-it Facilitation: additional resources

A super fun process exploring how empirical contributions can reshape and embrace theory by addressing gaps through better designs and clearer interpretations of findings.

Fix-it Felix: advances in testing plant facilitation as a restoration tool in Applied Vegetation Science.

The original contribution was longer with a more complete set of resources. Here is the full citation list that framed and supported the story and discussion.

Literature cited

Badano, E.I., Bustamante, R.O., Villarroel, E., Marquet, P.A. & Cavieres, L.A. 2015. Facilitation by nurse plants regulates community invasibility in harsh environments. Journal of Vegetation Science: 756-767.

Badano, E.I., Samour-Nieva, O.R., Flores, J., Flores-Flores, J.L., Flores-Cano, J.A. & Rodas-Ortíz, J.P. 2016. Facilitation by nurse plants contributes to vegetation recovery in human-disturbed desert ecosystems. Journal of Plant Ecology 9: 485-497.

Barney, J.N. 2016. Invasive plant management must be driven by a holistic understanding of invader impacts. Applied Vegetation Science 19: 183-184.

Bertness, M.D. & Callaway, R. 1994. Positive interactions in communities. Trends in Ecology and Evolution 9: 191-193.

Bronstein, J.L. 2009. The evolution of facilitation and mutualism. Journal of Ecology 97: 1160-1170.

Bruno, J.F., Stachowicz, J.J. & Bertness, M.D. 2003. Inclusion of facilitation into ecological theory. Trends in Ecology and Evolution 18: 119-125.

Bulleri, F., Bruno, J.F., Silliman, B.R. & Stachowicz, J.J. 2016. Facilitation and the niche: implications for coexistence, range shifts and ecosystem functioning. Functional Ecology 30: 70-78.

Callaway, R.M. 1998. Are positive interactions species-specific? Oikos 82: 202-207.

Chamberlain, S.A., Bronstein, J.L. & Rudgers, J.A. 2014. How context dependent are species interactions? Ecology Letters 17: 881-890.

Filazzola, A. & Lortie, C.J. 2014. A systematic review and conceptual framework for the mechanistic pathways of nurse plants. Global Ecology and Biogeography 23: 1335-1345.

Gomez-Aparicio, L., Zamora, R., Gomez, J.M., Hodar, J.A., Castro, J. & Baraza, E. 2004. Applying plant facilitation to forest restoration: a meta-analysis of the use of shrubs as nurse plants. Ecological Applications 14: 1128-1138.

Holmgren, M. & Scheffer, M. 2010. Strong facilitation in mild environments: the stress gradient hypothesis revisited. Journal of Ecology 98: 1269-1275.

James, J.J., Rinella, M.J. & Svejcar, T. 2012. Grass seedling demography and sagebrush steppe restoration. Rangeland Ecology & Management 65: 409-417.

Lortie, C.J., Filazzola, A., Welham, C. & Turkington, R. 2016. A cost–benefit model for plant–plant interactions: a density-series tool to detect facilitation. Plant Ecology: 1-15.

Macek, P., Schöb, C., Núñez-Ávila, M., Hernández Gentina, I.R., Pugnaire, F.I. & Armesto, J.J. 2017. Shrub facilitation drives tree establishment in a semiarid fog-dependent ecosystem. Applied Vegetation Science.

Malanson, G.P. & Resler, L.M. 2015. Neighborhood functions alter unbalanced facilitation on a stress gradient. Journal of Theoretical Biology 365: 76-83.

McIntire, E. & Fajardo, A. 2011. Facilitation within species: a possible origin of group-selected superorganisms. American Naturalist 178: 88-97.

McIntire, E.J.B. & Fajardo, A. 2014. Facilitation as a ubiquitous driver of biodiversity. New Phytologist 201: 403-416.

Michalet, R., Brooker, R.W., Cavieres, L.A., Kikvidze, Z., Lortie, C.J., Pugnaire, F.I., Valiente‐Banuet, A. & Callaway, R.M. 2006. Do biotic interactions shape both sides of the humped‐back model of species richness in plant communities? Ecology Letters 9: 767-773.

Michalet, R., Le Bagousse-Pinguet, Y., Maalouf, J.-P. & Lortie, C.J. 2014. Two alternatives to the stress-gradient hypothesis at the edge of life: the collapse of facilitation and the switch from facilitation to competition. Journal of Vegetation Science 25: 609-613.

Noumi, Z., Chaieb, M., Michalet, R. & Touzard, B. 2015. Limitations to the use of facilitation as a restoration tool in arid grazed savanna: a case study. Applied Vegetation Science 18: 391-401.

O’Brien, M.J., Pugnaire, F.I., Armas, C., Rodríguez-Echeverría, S. & Schöb, C. 2017. The shift from plant–plant facilitation to competition under severe water deficit is spatially explicit. Ecology and Evolution 7: 2441-2448.

Pescador, D.S., Chacón-Labella, J., de la Cruz, M. & Escudero, A. 2014. Maintaining distances with the engineer: patterns of coexistence in plant communities beyond the patch-bare dichotomy. New Phytologist 204: 140-148.

Rydgren, K., Hagen, D., Rosef, L., Pedersen, B. & Aradottir, A.L. 2017. Designing seed mixtures for restoration on alpine soils: who should your neighbours be? Applied Vegetation Science.

Sheley, R.L. & James, J.J. 2014. Simultaneous intraspecific facilitation and interspecific competition between native and annual grasses. Journal of Arid Environments 104: 80-87.

Silliman, B.R., Schrack, E., He, Q., Cope, R., Santoni, A., van der Heide, T., Jacobi, R., Jacobi, M. & van de Koppel, J. 2015. Facilitation shifts paradigms and can amplify coastal restoration efforts. Proceedings of the National Academy of Sciences 112: 14295-14300.

Stachowicz, J.J. 2001. Mutualism, facilitation, and the structure of ecological communities. Bioscience 51: 235-246.

von Gillhaussen, P., Rascher, U., Jablonowski, N.D., Plückers, C., Beierkuhnlein, C. & Temperton, V.M. 2014. Priority effects of time of arrival of plant functional groups override sowing interval or density effects: a grassland experiment. PLoS ONE 9: e86906.

Went, F.W. 1942. The dependence of certain annual plants on shrubs in southern California deserts. Bulletin of the Torrey Botanical Club 69: 100-114.

Xiao, S. & Michalet, R. 2013. Do indirect interactions always contribute to net indirect facilitation? Ecological Modelling 268: 1-8.

Overdispersion tests in #rstats

A brief note on overdispersion


The Poisson distribution assumes the variance is equal to the mean.

The quasi-Poisson model assumes the variance is a linear function of the mean.

The negative binomial model assumes the variance is a quadratic function of the mean.

rstats implementation

#to test, fit a Poisson GLM and then apply the function (from the AER package) to this model


dispersiontest(object, trafo = NULL, alternative = c("greater", "two.sided", "less"))

trafo = 1 tests a linear variance function (quasi-Poisson); trafo = 2 tests a quadratic variance function (negative binomial). You can also supply your own transformation function to trafo.


c = 0 indicates equidispersion

c > 0 indicates overdispersion

c < 0 indicates underdispersion
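A minimal sketch of the test in practice (assuming the AER package is installed; the data and model here are hypothetical, simulated for illustration):

```r
library(AER)

set.seed(42)
# simulate overdispersed counts via a negative binomial
df <- data.frame(treatment = rep(c("control", "shrub"), each = 50))
df$count <- rnbinom(100, mu = ifelse(df$treatment == "shrub", 8, 4), size = 1.5)

# fit the Poisson GLM first, then apply the test to the fitted model
m <- glm(count ~ treatment, family = poisson, data = df)

# trafo = 1 tests the linear (quasi-Poisson) variance specification
dispersiontest(m, trafo = 1, alternative = "greater")

# trafo = 2 tests the quadratic (negative binomial) variance specification
dispersiontest(m, trafo = 2, alternative = "greater")
```

A significant test with a positive estimate of c suggests the Poisson assumption of equidispersion is violated.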


  1. Function description from the vignette for the AER package.
  2. Excellent Stack Exchange description of interpretation.

A note on AIC scores for quasi-families in #rstats

A summary note on a recent set of #rstats discoveries in estimating AIC scores to better understand a quasi-Poisson family in GLMs relative to treating the data as Poisson.

Conceptual GLM workflow rules/guidelines

  1. Data are best left untransformed. Fit a better model to the data.
  2. Select your data structure to match the purpose of the statistical model.
  3. Use logic and understanding of the data, not AIC scores, to select the best model.

(1) Typically, the power and flexibility of GLMs in R (even in base R) get most of the work done for the ecological data we work with within the research team. We prefer to leave data untransformed and simple when possible and to use the family or offset arguments within GLMs to address data issues.

(2) Data structure is a new concept to us. We have come to appreciate that there are both individual- and population-level queries associated with many of the datasets we have collected. For our purposes, data structure is defined as the level at which dplyr::group_by is applied to tally or count frequencies. If the ecological purpose of the experiment is defined as the population response to a treatment, for instance, the population becomes the sample unit – not the individual organism – and the data are summarised as such. It is critical to match the structure of the wrangled data to the purpose of the experiment in order to fit appropriate models. Higher-order data structures can reduce the likelihood of nested, oversampled, or pseudoreplicated model fitting.
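For instance, a hypothetical sketch of moving from an individual-level to a population-level data structure (the column names and values are invented for illustration):

```r
library(dplyr)

# hypothetical individual-level records: one row per organism
individuals <- tibble(
  site      = rep(c("A", "B"), each = 6),
  treatment = rep(c("control", "shrub"), times = 6),
  emerged   = c(1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0)
)

# population-level structure: the site x treatment combination
# becomes the sample unit, summarised via group_by
populations <- individuals %>%
  group_by(site, treatment) %>%
  summarise(n_emerged = sum(emerged), n_total = n(), .groups = "drop")
```

The same raw records now support either query, but models should be fit to the structure that matches the purpose of the experiment.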

(3) Know thy data and experiment. It is easy to get lost in model fitting and dive deep into unduly complex models. There are tools before model fitting that can prime you for better, more elegant model fits.


  1. Wrangle then data viz.
  2. Use library(fitdistrplus) to explore distributions.
  3. Select data structure.
  4. Fit models.
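Step 2 above might look like the following sketch (simulated count data; assumes the fitdistrplus package is installed):

```r
library(fitdistrplus)

set.seed(1)
counts <- rnbinom(200, mu = 5, size = 2)  # simulated overdispersed counts

# Cullen-Frey plot: where do the data fall relative to candidate distributions?
descdist(counts, discrete = TRUE)

# compare candidate fits directly before committing to a GLM family
fit_pois   <- fitdist(counts, "pois")
fit_nbinom <- fitdist(counts, "nbinom")
gofstat(list(fit_pois, fit_nbinom))
```

The goodness-of-fit statistics and the plot together prime the choice of family before any model fitting.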

Now, specific to the topic of AIC scores for quasi-family field studies.

We recently selected quasi-Poisson as the family to model frequency and count data (for individual-level data structures). This addressed overdispersion issues within the data. AIC scores are best used for understanding prediction, not description, so logic and exploration of the distributions, CDF plots, and examination of the deviance (i.e. it should not be more than double the degrees of freedom) framed the data and model contexts. To contrast Poisson with quasi-Poisson for prediction, i.e. would the animals respond differently to the treatments/factors within the experiment, we used the following #rstats solutions.



#deviance calculation
dfun <- function(object) {
  with(object, sum((weights * residuals^2)[weights > 0]) / df.residual)
}

#reuses the AIC from the poisson family estimation
x.quasipoisson <- function(...) {
  res <- quasipoisson(...)
  res$aic <- poisson(...)$aic
  res
}

#MuMIn was the AIC package that provided the most intuitive solution set####
library(MuMIn)

#dredge requires na.action = na.fail on the global model
m <- update(m, family = "x.quasipoisson", na.action = na.fail)
m1 <- dredge(m, rank = "QAIC", chat = dfun(m))

#repeat as needed to contrast different models



This #rstats opportunity generated a lot of positive discussion on data structures, how we use AIC scores, and how to estimate fit for at least this quasi-family model set in as few lines of code as possible.


  1. An R vignette by Ben Bolker on quasi solutions.
  2. An Ecology article on quasi-Poisson versus negative binomial regression for overdispersed count data.
  3. A Stack Exchange discussion on AIC scores.


Same data, different structure, leads to different models. Quasi-Poisson is a reasonable solution for overdispersed count and frequency data in animal ecology. AIC scores take a bit of work, but not extensive code, to extract. AIC scores provide useful insight into predictive capacities if the purpose is individual-level prediction of counts/frequencies in response to treatments.


A rule-of-thumb for chi-squared tests in systematic reviews


A chi-squared test with few observations is not a super powerful statistical test (note, it is apparently termed both the chi-square and chi-squared test depending on the discipline and source). Nonetheless, this test is useful in systematic reviews to confirm whether observed patterns in the frequency of study of a particular dimension of a topic are statistically different (at least according to about 4/10 referees I have encountered) – not as a vote-counting tool, but as a means for the referees and readers of the review to assess whether the counts of approaches, places, species, or some measure used in a set of primary studies differed. The mistaken rule-of-thumb is that <5 counts per cell violates the assumptions of the chi-squared test. However, this intriguing post reminds us that it is not the observed value but the expected value that must be at least 5 (blog post on the topic and statistical article describing the assumption). I propose that this is a reasonable and logical rule-of-thumb for some forms of scientific synthesis, such as systematic reviews exploring patterns of research within a set of studies – not the strength of evidence or effect sizes.

An appropriate rule-of-thumb for when you should report a chi-squared test statistic in a systematic review is thus as follows.

When doing a systematic review that includes quantitative summaries of frequencies of various study dimensions, the total number of studies summarized (dividend) divided by the number of levels of the specific dimension tested (divisor) should be at least 5 (quotient). You are simply calculating whether the expected values can even reach 5 given your set of studies and the categorical analysis of the frequency of a specific study dimension applied during your review process.

total number of studies/number of levels contrasted for specific study set dimension >= 5

[In R, I used nrow(main dataframe)/nrow(frequency dataframe for dimension); however, it was a bit clunky. You could use the length function, or write a new function and use a for loop over all the factors you are likely to test].
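A base-R sketch of the check and the test itself (the counts here are hypothetical, invented to match the 49-study, 7-hypothesis scenario discussed below):

```r
# hypothetical frequencies: number of primary studies testing each hypothesis
counts <- c(H1 = 9, H2 = 6, H3 = 8, H4 = 7, H5 = 5, H6 = 8, H7 = 6)

n_studies <- sum(counts)  # 49 studies in total

# rule-of-thumb: expected count per level must be able to reach 5
n_studies / length(counts) >= 5  # 49/7 = 7, so reporting the test is reasonable

# the test itself, against equal expected frequencies
chisq.test(counts)
```

If the quotient falls below 5, report the frequencies descriptively instead.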

Statistical assumptions aside, it is also reasonable to propose that a practical rule-of-thumb for literature syntheses (systematic reviews and meta-analyses) requires at least 5 studies completed that test each specific level of the factor or attribute summarized.


For example, my colleagues and I were recently doing a systematic review that captured a total of 49 independent primary studies (GitHub repo). We wanted to report the frequencies with which the specific topic differed in how it was tested by each specific hypothesis (as listed by the primary authors), and there were a total of 7 different hypotheses tested within this set of studies. The division rule-of-thumb for statistical reporting in a review was applied, 49/7 = 7, so we elected to report a chi-squared test in the Results of the manuscript. Other interesting dimensions of study for the topic had many more levels, such as country of study or taxa, and violated this rule. In these instances, we simply reported in the Results the frequencies with which these aspects were studied, without supporting statistics (or we used much simpler classification strategies). A systematic review is a form of formalized synthesis in ecology, and these syntheses typically do not include effect size estimates in ecology (other disciplines use the term systematic review interchangeably with meta-analysis; we do not do so in ecology). For these more descriptive review formats, this rule seems appropriate for describing differences in the synthesis of a set of studies topologically, i.e. summarizing information about the set of studies – like the meta-data of the data but not the primary data (here is the GitHub repo we used for the specific systematic review that led to this rule for our team). This fuzzy rule led to a more interesting general insight: an overly detailed approach to the synthesis of a set of studies likely defeats the purpose of the synthesis.

Rules of thumb for better #openscience and transparent #collaboration

Rules-of-thumb for reuse of data and plots
1. If you use unpublished data from someone else, even if they are done with it, invite them to be a co-author.
2. If you use a published dataset, at the minimum contact authors, and depending on the purpose of the reuse, consider inviting them to become a co-author. Check licensing.
3. If you use plots initiated by another but in a significantly different way/for a novel purpose, invite them to be co-author (within a reasonable timeframe).
4. If you reuse the experimental plots for the exact same purpose, offer the person that set it up ‘right of first refusal’ as first author (within a fair period of time such as 1-2 years, see next rule).
5. If adding the same data to an experiment, first authorship can shift to more recent researchers that do significant work because the purpose shifts from short to long-term ecology.  Prof Turkington (my PhD mentor) used this model for his Kluane plots.  He surveyed for many years and always invited primary researchers to be co-authors but not first.  They often declined after a few years.
6. Set a reasonable authorship embargo to give researchers that have graduated/changed focus of profession a generous chance to be first authors on papers.  This can vary from 8 months to a year or more depending on how critical it is to share the research publicly.  Development pressures, climate change, and extinctions wait for no one sadly.
Rules-of-thumb for collaborative writing
1. Write first draft.
2. Share this draft with all potential first authors so that they can see what they would be joining.
3. Offer co-authorship to everyone that appropriately contributed at this juncture and populate the authorship list as firmly as possible.
4. Potential co-authors are invited to refuse authorship but err on the side of generosity with invitations.
5. Do revisions in serial, not parallel. The story and flow get unduly challenging for everyone when track changes are layered.